I1223 12:56:11.618446 8 e2e.go:243] Starting e2e run "9734745c-763b-459f-b9da-f6dde306efad" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577105770 - Will randomize all specs
Will run 215 of 4412 specs

Dec 23 12:56:12.078: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 12:56:12.082: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 23 12:56:12.106: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 23 12:56:12.144: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 23 12:56:12.144: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 23 12:56:12.144: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 23 12:56:12.163: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 23 12:56:12.163: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 23 12:56:12.163: INFO: e2e test version: v1.15.7
Dec 23 12:56:12.165: INFO: kube-apiserver version: v1.15.1
SSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:56:12.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
Dec 23 12:56:12.303: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 23 12:56:12.312: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5736" to be "success or failure"
Dec 23 12:56:12.324: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.734329ms
Dec 23 12:56:14.332: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020053094s
Dec 23 12:56:16.342: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029962983s
Dec 23 12:56:18.621: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30883751s
Dec 23 12:56:20.628: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31657103s
Dec 23 12:56:22.643: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.330836472s
Dec 23 12:56:24.659: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.346918597s
STEP: Saw pod success
Dec 23 12:56:24.659: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 23 12:56:24.664: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 23 12:56:25.093: INFO: Waiting for pod pod-host-path-test to disappear
Dec 23 12:56:25.144: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:56:25.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5736" for this suite.
Dec 23 12:56:31.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:56:31.446: INFO: namespace hostpath-5736 deletion completed in 6.295077126s

• [SLOW TEST:19.282 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
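For reference, the pod this spec creates mounts a hostPath volume and asserts on the file mode the container observes. A minimal Go sketch of that kind of manifest, built with the k8s.io/api types; the host path, image, and command are illustrative, not the e2e framework's actual fixtures:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/hostpath-e2e", // illustrative path on the node
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the mode of the mount point; the test asserts on output like this.
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}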
S
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:56:31.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-00e41509-a57f-4434-839b-0b6f87e67f8e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-00e41509-a57f-4434-839b-0b6f87e67f8e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:56:44.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5001" for this suite.
Dec 23 12:57:08.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:57:08.386: INFO: namespace projected-5001 deletion completed in 24.278200297s

• [SLOW TEST:36.939 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
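This spec creates a pod whose projected volume sources a ConfigMap, edits the ConfigMap, and waits for the kubelet to refresh the mounted file. A hedged sketch of such a pod; the ConfigMap name, mount path, and polling command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-upd", // illustrative name
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "busybox",
				// Keep printing the mounted key; the test waits until the updated value shows up.
				Command: []string{"sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}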
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:57:08.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:57:08.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8926" for this suite.
Dec 23 12:57:14.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:57:14.947: INFO: namespace kubelet-test-8926 deletion completed in 6.195233951s

• [SLOW TEST:6.561 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:57:14.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1223 12:57:31.383784 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 12:57:31.384: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:57:31.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3461" for this suite.
Dec 23 12:57:55.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:57:55.967: INFO: namespace gc-3461 deletion completed in 24.573735792s

• [SLOW TEST:41.019 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
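The key mechanic here is ownerReferences: half the pods get a second owner added, and the first owner is then deleted while waiting for dependents (foreground propagation). A sketch of the metadata involved, assuming illustrative UIDs; only the shape of the references is taken from the log:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	isController := true
	block := true
	// A pod owned by two ReplicationControllers: foreground-deleting the first
	// owner must not take the pod down while the second reference is still valid.
	owners := []metav1.OwnerReference{
		{
			APIVersion:         "v1",
			Kind:               "ReplicationController",
			Name:               "simpletest-rc-to-be-deleted",
			UID:                "uid-of-rc-to-be-deleted", // placeholder UID
			Controller:         &isController,
			BlockOwnerDeletion: &block,
		},
		{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-stay",
			UID:        "uid-of-rc-to-stay", // placeholder UID
		},
	}
	// Foreground deletion waits for dependents, but the GC leaves a dependent
	// alone as long as another owner still exists.
	foreground := metav1.DeletePropagationForeground
	opts := &metav1.DeleteOptions{PropagationPolicy: &foreground}

	y, _ := yaml.Marshal(owners)
	fmt.Println(string(y))
	y, _ = yaml.Marshal(opts)
	fmt.Println(string(y))
}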
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:57:55.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9691.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9691.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 12:58:12.332: INFO: File jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-32112178-ca99-4e4f-903c-86841103f129 contains '' instead of 'foo.example.com.'
Dec 23 12:58:12.333: INFO: Lookups using dns-9691/dns-test-32112178-ca99-4e4f-903c-86841103f129 failed for: [jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local]
Dec 23 12:58:17.366: INFO: DNS probes using dns-test-32112178-ca99-4e4f-903c-86841103f129 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9691.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9691.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 12:58:31.633: INFO: File wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 contains '' instead of 'bar.example.com.'
Dec 23 12:58:31.639: INFO: File jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 contains '' instead of 'bar.example.com.'
Dec 23 12:58:31.639: INFO: Lookups using dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 failed for: [wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local]
Dec 23 12:58:36.671: INFO: File wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 23 12:58:36.689: INFO: File jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 23 12:58:36.689: INFO: Lookups using dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 failed for: [wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local]
Dec 23 12:58:41.654: INFO: File wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 23 12:58:41.660: INFO: File jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 contains 'foo.example.com. ' instead of 'bar.example.com.'
Dec 23 12:58:41.660: INFO: Lookups using dns-9691/dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 failed for: [wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local]
Dec 23 12:58:46.656: INFO: DNS probes using dns-test-9fb462b4-a578-4729-91bd-cc061d29d883 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9691.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9691.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 12:59:00.995: INFO: File wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-c8be6ab9-3f4e-47c2-bdcd-1c387b106b89 contains '' instead of '10.104.126.120'
Dec 23 12:59:01.000: INFO: File jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-c8be6ab9-3f4e-47c2-bdcd-1c387b106b89 contains '' instead of '10.104.126.120'
Dec 23 12:59:01.000: INFO: Lookups using dns-9691/dns-test-c8be6ab9-3f4e-47c2-bdcd-1c387b106b89 failed for: [wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local jessie_udp@dns-test-service-3.dns-9691.svc.cluster.local]
Dec 23 12:59:06.088: INFO: File wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local from pod dns-9691/dns-test-c8be6ab9-3f4e-47c2-bdcd-1c387b106b89 contains '' instead of '10.104.126.120'
Dec 23 12:59:06.103: INFO: Lookups using dns-9691/dns-test-c8be6ab9-3f4e-47c2-bdcd-1c387b106b89 failed for: [wheezy_udp@dns-test-service-3.dns-9691.svc.cluster.local]
Dec 23 12:59:11.034: INFO: DNS probes using dns-test-c8be6ab9-3f4e-47c2-bdcd-1c387b106b89 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:59:11.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9691" for this suite.
Dec 23 12:59:19.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:59:19.510: INFO: namespace dns-9691 deletion completed in 8.222614146s

• [SLOW TEST:83.541 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
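The service under test is an ExternalName service: cluster DNS answers queries for its in-cluster name with a CNAME to spec.externalName. The spec then flips externalName from foo.example.com to bar.example.com, and finally converts the service to type ClusterIP so the same DNS name resolves to an A record (10.104.126.120 above). A minimal sketch of the initial object:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Queries for dns-test-service-3.dns-9691.svc.cluster.local get a CNAME
	// pointing at the external name; changing spec.externalName changes the target.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-test-service-3",
			Namespace: "dns-9691",
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	out, _ := yaml.Marshal(svc)
	fmt.Print(string(out))
}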
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:59:19.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 23 12:59:19.674: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5183,SelfLink:/api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-watch-closed,UID:4e12ea04-9974-4c77-aa94-264a8db118fe,ResourceVersion:17761370,Generation:0,CreationTimestamp:2019-12-23 12:59:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 12:59:19.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5183,SelfLink:/api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-watch-closed,UID:4e12ea04-9974-4c77-aa94-264a8db118fe,ResourceVersion:17761371,Generation:0,CreationTimestamp:2019-12-23 12:59:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 23 12:59:19.712: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5183,SelfLink:/api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-watch-closed,UID:4e12ea04-9974-4c77-aa94-264a8db118fe,ResourceVersion:17761372,Generation:0,CreationTimestamp:2019-12-23 12:59:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 12:59:19.713: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5183,SelfLink:/api/v1/namespaces/watch-5183/configmaps/e2e-watch-test-watch-closed,UID:4e12ea04-9974-4c77-aa94-264a8db118fe,ResourceVersion:17761373,Generation:0,CreationTimestamp:2019-12-23 12:59:19 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:59:19.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5183" for this suite.
Dec 23 12:59:25.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:59:26.029: INFO: namespace watch-5183 deletion completed in 6.281066083s

• [SLOW TEST:6.519 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
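The technique being verified: remember the resourceVersion of the last event the closed watch delivered, then open a new watch from that version so every change made in between is replayed. A client-go sketch, using the context-free method signature contemporary with this v1.15 log (newer client-go versions add a context.Context parameter); the resourceVersion literal is taken from the events above for illustration:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// resourceVersion of the last event the first watch saw before it closed.
	lastRV := "17761371"
	w, err := cs.CoreV1().ConfigMaps("watch-5183").Watch(metav1.ListOptions{
		ResourceVersion: lastRV,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// The restarted watch replays everything after lastRV: here the second
	// MODIFIED (mutation: 2) and the DELETED event the closed watch never saw.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s\n", ev.Type)
	}
}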
S
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:59:26.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 23 12:59:26.159: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 23 12:59:31.167: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:59:32.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5840" for this suite.
Dec 23 12:59:38.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:59:38.349: INFO: namespace replication-controller-5840 deletion completed in 6.133084223s

• [SLOW TEST:12.319 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
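"Releasing" works through label selection: the ReplicationController owns pods matching its selector, and relabeling a pod so it no longer matches makes the controller drop its ownerReference and create a replacement. A sketch of such an RC, assuming an illustrative image; only the name and selector shape come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	one := int32(1)
	// If a matching pod's label is changed so the selector no longer applies,
	// the controller "releases" the pod and spins up a replacement, which is
	// why the log shows it finding a new pod after the label change.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: map[string]string{"name": "pod-release"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "pod-release"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(rc)
	fmt.Print(string(out))
	// Releasing a pod is then e.g.:
	//   kubectl label pod <pod-name> name=not-matching --overwrite
}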
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:59:38.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 12:59:38.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f" in namespace "projected-6060" to be "success or failure"
Dec 23 12:59:38.501: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.711754ms
Dec 23 12:59:40.528: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064424147s
Dec 23 12:59:42.545: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081928083s
Dec 23 12:59:44.567: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103910988s
Dec 23 12:59:46.585: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121779403s
Dec 23 12:59:48.613: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14964898s
Dec 23 12:59:50.629: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.165764155s
Dec 23 12:59:52.636: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.173248456s
STEP: Saw pod success
Dec 23 12:59:52.637: INFO: Pod "downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f" satisfied condition "success or failure"
Dec 23 12:59:52.640: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f container client-container:
STEP: delete the pod
Dec 23 12:59:52.696: INFO: Waiting for pod downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f to disappear
Dec 23 12:59:52.708: INFO: Pod downwardapi-volume-673417d7-b536-4bbb-add5-8309ceaf0f3f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 12:59:52.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6060" for this suite.
Dec 23 12:59:58.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 12:59:58.979: INFO: namespace projected-6060 deletion completed in 6.264213044s

• [SLOW TEST:20.630 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
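The projected downward API volume in this spec exposes pod metadata as files, and defaultMode controls the permission bits the container sees on those files. A sketch of the volume definition, assuming mode 0400 and an illustrative file path; the test's exact fixtures may differ:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400)
	// defaultMode applies to every file projected into the volume unless an
	// item overrides it; the container then sees e.g. -r-------- on the file.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Print(string(out))
}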
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 12:59:58.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 12:59:59.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16" in namespace "downward-api-2160" to be "success or failure"
Dec 23 12:59:59.229: INFO: Pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16": Phase="Pending", Reason="", readiness=false. Elapsed: 33.324098ms
Dec 23 13:00:01.241: INFO: Pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045283693s
Dec 23 13:00:03.249: INFO: Pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053931729s
Dec 23 13:00:05.265: INFO: Pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069504041s
Dec 23 13:00:07.288: INFO: Pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092674164s
STEP: Saw pod success
Dec 23 13:00:07.289: INFO: Pod "downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16" satisfied condition "success or failure"
Dec 23 13:00:07.297: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16 container client-container:
STEP: delete the pod
Dec 23 13:00:07.365: INFO: Waiting for pod downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16 to disappear
Dec 23 13:00:07.373: INFO: Pod downwardapi-volume-ee62fe85-a3c0-4f16-9dbf-69aef036ca16 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:00:07.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2160" for this suite.
Dec 23 13:00:13.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:00:13.628: INFO: namespace downward-api-2160 deletion completed in 6.248176941s

• [SLOW TEST:14.648 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
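Here the downward API file is fed by a resourceFieldRef rather than a fieldRef, so the container can read its own CPU limit from a file. A sketch of the volume; the container name and file path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// resourceFieldRef in a volume item requires the container name; the file
	// then contains the container's limits.cpu value.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Print(string(out))
}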
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:00:13.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 23 13:00:13.699: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:00:29.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9728" for this suite.
Dec 23 13:00:35.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:00:35.304: INFO: namespace init-container-9728 deletion completed in 6.202701486s

• [SLOW TEST:21.674 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
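With restartPolicy Never, a failing init container is terminal: the pod goes to phase Failed and the app containers are never started, which is exactly what this spec asserts. A sketch of such a pod, with illustrative names, image, and commands:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/true"}, // never started: init failed and restartPolicy is Never
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}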
SSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:00:35.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf in namespace container-probe-7132
Dec 23 13:00:43.442: INFO: Started pod liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf in namespace container-probe-7132
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 13:00:43.450: INFO: Initial restart count of pod liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf is 0
Dec 23 13:01:05.602: INFO: Restart count of pod container-probe-7132/liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf is now 1 (22.151920512s elapsed)
Dec 23 13:01:25.737: INFO: Restart count of pod container-probe-7132/liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf is now 2 (42.286902237s elapsed)
Dec 23 13:01:45.893: INFO: Restart count of pod container-probe-7132/liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf is now 3 (1m2.443008996s elapsed)
Dec 23 13:02:04.015: INFO: Restart count of pod container-probe-7132/liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf is now 4 (1m20.564781027s elapsed)
Dec 23 13:03:04.479: INFO: Restart count of pod container-probe-7132/liveness-55ca8709-c2c9-4f8d-9570-47841c583aaf is now 5 (2m21.029303602s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:03:04.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7132" for this suite.
Dec 23 13:03:10.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:03:10.760: INFO: namespace container-probe-7132 deletion completed in 6.164986007s

• [SLOW TEST:155.457 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
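The climbing restart counts come from a liveness probe that keeps failing: each failure makes the kubelet kill and restart the container, incrementing status.restartCount. A sketch of a pod with an always-failing probe; the endpoint, image, and thresholds are illustrative, and this uses the v1.15-era API in which the probe's handler field is named Handler (renamed ProbeHandler in later releases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "nginx",
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/no-such-path", // always 404s, so the probe fails
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}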
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:03:10.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 13:03:10.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8" in namespace "projected-714" to be "success or failure"
Dec 23 13:03:10.867: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232525ms
Dec 23 13:03:12.878: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015325207s
Dec 23 13:03:14.902: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038968786s
Dec 23 13:03:16.914: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051326222s
Dec 23 13:03:18.931: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068512382s
Dec 23 13:03:20.944: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080836226s
Dec 23 13:03:22.951: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.088536414s
STEP: Saw pod success
Dec 23 13:03:22.952: INFO: Pod "downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8" satisfied condition "success or failure"
Dec 23 13:03:22.954: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8 container client-container:
STEP: delete the pod
Dec 23 13:03:23.116: INFO: Waiting for pod downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8 to disappear
Dec 23 13:03:23.135: INFO: Pod downwardapi-volume-91bf0a69-cf55-4fbf-bd24-7c7461568fd8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:03:23.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-714" for this suite.
Dec 23 13:03:29.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:03:29.314: INFO: namespace projected-714 deletion completed in 6.171627609s

• [SLOW TEST:18.553 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:03:29.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 23 13:03:29.479: INFO: Waiting up to 5m0s for pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402" in namespace "emptydir-5350" to be "success or failure"
Dec 23 13:03:29.490: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402": Phase="Pending", Reason="", readiness=false. Elapsed: 10.070134ms
Dec 23 13:03:31.504: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024918576s
Dec 23 13:03:33.515: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035885608s
Dec 23 13:03:35.788: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308986778s
Dec 23 13:03:37.798: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318648573s
Dec 23 13:03:39.810: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.33045584s
STEP: Saw pod success
Dec 23 13:03:39.810: INFO: Pod "pod-f7e02e23-c70a-4e22-a73b-d84028604402" satisfied condition "success or failure"
Dec 23 13:03:39.819: INFO: Trying to get logs from node iruya-node pod pod-f7e02e23-c70a-4e22-a73b-d84028604402 container test-container:
STEP: delete the pod
Dec 23 13:03:40.011: INFO: Waiting for pod pod-f7e02e23-c70a-4e22-a73b-d84028604402 to disappear
Dec 23 13:03:40.092: INFO: Pod pod-f7e02e23-c70a-4e22-a73b-d84028604402 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:03:40.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5350" for this suite.
Dec 23 13:03:46.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:03:46.347: INFO: namespace emptydir-5350 deletion completed in 6.243511938s

• [SLOW TEST:17.032 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
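"(root,0666,tmpfs)" encodes the test parameters: run as root, write a file with mode 0666, and back the emptyDir with memory (tmpfs). The volume part of that pod looks roughly like this sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// medium: Memory makes the kubelet mount the emptyDir as tmpfs instead of
	// node disk; the test then writes and stats a 0666 file inside it.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Print(string(out))
}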
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:03:46.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 23 13:03:46.487: INFO: Waiting up to 5m0s for pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496" in namespace "downward-api-5713" to be "success or failure"
Dec 23 13:03:46.497: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Pending", Reason="", readiness=false. Elapsed: 9.316333ms
Dec 23 13:03:48.509: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02191657s
Dec 23 13:03:50.527: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039038594s
Dec 23 13:03:52.551: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063321288s
Dec 23 13:03:54.568: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08080015s
Dec 23 13:03:56.581: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0937702s
Dec 23 13:03:58.595: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.107989372s
STEP: Saw pod success
Dec 23 13:03:58.596: INFO: Pod "downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496" satisfied condition "success or failure"
Dec 23 13:03:58.604: INFO: Trying to get logs from node iruya-node pod downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496 container dapi-container:
STEP: delete the pod
Dec 23 13:03:58.686: INFO: Waiting for pod downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496 to disappear
Dec 23 13:03:58.695: INFO: Pod downward-api-1ff681b8-19f0-46f3-9087-bda3ae8cf496 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:03:58.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5713" for this suite.
Dec 23 13:04:04.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:04:05.029: INFO: namespace downward-api-5713 deletion completed in 6.323427283s

• [SLOW TEST:18.681 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
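This spec relies on a documented fallback: when a container declares no CPU/memory limits, resourceFieldRef environment variables report the node's allocatable capacity instead. A sketch of the env wiring; variable names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// With no limits set on the container, these resolve to node allocatable.
	envs := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
	out, _ := yaml.Marshal(envs)
	fmt.Print(string(out))
}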
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:04:05.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1223 13:04:46.620077 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 13:04:46.620: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:04:46.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6072" for this suite.
Dec 23 13:04:55.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:04:56.935: INFO: namespace gc-6072 deletion completed in 10.309066147s

• [SLOW TEST:51.905 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
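"Delete options say so" means propagationPolicy Orphan: the RC is deleted, but instead of deleting its pods the garbage collector strips their ownerReferences, hence the 30-second watch above to confirm nothing disappears. A sketch:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Orphan propagation: the owner goes away, the dependents stay behind
	// with their ownerReferences removed by the garbage collector.
	orphan := metav1.DeletePropagationOrphan
	opts := &metav1.DeleteOptions{PropagationPolicy: &orphan}
	out, _ := yaml.Marshal(opts)
	fmt.Print(string(out))
	// With client-go (v1.15-era, context-free API) this would be used as e.g.:
	//   cs.CoreV1().ReplicationControllers(ns).Delete("simpletest.rc", opts)
}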
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:04:56.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a14556d2-7b70-4979-b377-10c069d4407f
STEP: Creating a pod to test consume secrets
Dec 23 13:04:57.452: INFO: Waiting up to 5m0s for pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7" in namespace "secrets-213" to be "success or failure"
Dec 23 13:04:57.490: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.262233ms
Dec 23 13:05:00.993: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.539889209s
Dec 23 13:05:03.724: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271299658s
Dec 23 13:05:05.735: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282227027s
Dec 23 13:05:07.744: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.291355829s
Dec 23 13:05:09.756: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.303192434s
Dec 23 13:05:11.773: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.319636782s
Dec 23 13:05:13.885: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.431569547s
Dec 23 13:05:15.897: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.444075476s
STEP: Saw pod success
Dec 23 13:05:15.897: INFO: Pod "pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7" satisfied condition "success or failure"
Dec 23 13:05:15.905: INFO: Trying to get logs from node iruya-node pod pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7 container secret-volume-test:
STEP: delete the pod
Dec 23 13:05:16.042: INFO: Waiting for pod pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7 to disappear
Dec 23 13:05:16.096: INFO: Pod pod-secrets-05306067-2b2f-4067-8a73-c1545f8938b7 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:05:16.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-213" for this suite.
Dec 23 13:05:22.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:05:22.309: INFO: namespace secrets-213 deletion completed in 6.203689686s

• [SLOW TEST:25.373 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
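The spec mounts one secret at two different paths in the same pod: two volumes referencing the same secretName, both readable by the test container. A hedged sketch; the secret name, image, key, and paths are illustrative (the e2e names carry UID suffixes):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	secretName := "secret-test" // illustrative
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume-1",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: secretName},
					},
				},
				{
					Name: "secret-volume-2",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: secretName},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				// Read the same key through both mounts.
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}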
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:05:22.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d9073ac9-1d5d-434c-b6d9-3d6bafff0a1e
STEP: Creating a pod to test consume secrets
Dec 23 13:05:22.568: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914" in namespace "projected-8770" to be "success or failure"
Dec 23 13:05:22.589: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914": Phase="Pending", Reason="", readiness=false. Elapsed: 21.178171ms
Dec 23 13:05:24.599: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031160513s
Dec 23 13:05:26.615: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046439246s
Dec 23 13:05:28.641: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07282009s
Dec 23 13:05:30.663: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095069191s
Dec 23 13:05:32.696: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127431629s
STEP: Saw pod success
Dec 23 13:05:32.696: INFO: Pod "pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914" satisfied condition "success or failure"
Dec 23 13:05:32.700: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914 container projected-secret-volume-test:
STEP: delete the pod
Dec 23 13:05:32.755: INFO: Waiting for pod pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914 to disappear
Dec 23 13:05:32.766: INFO: Pod pod-projected-secrets-f1794614-2d6e-4e4f-a9e1-6a9ac356a914 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:05:32.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8770" for this suite.
Dec 23 13:05:38.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:05:39.071: INFO: namespace projected-8770 deletion completed in 6.295367318s

• [SLOW TEST:16.758 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:05:39.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:05:44.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2222" for this suite.
Dec 23 13:05:51.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:05:51.160: INFO: namespace watch-2222 deletion completed in 6.205643242s

• [SLOW TEST:12.088 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Dec 23 13:05:51.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:05:51.160: INFO: namespace watch-2222 deletion completed in 6.205643242s • [SLOW TEST:12.088 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:05:51.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 23 13:05:51.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5019' Dec 23 13:05:59.143: INFO: stderr: "" Dec 23 13:05:59.143: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Dec 23 13:06:09.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-5019 -o json' Dec 23 13:06:09.788: INFO: stderr: "" Dec 23 13:06:09.788: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-23T13:05:59Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-5019\",\n \"resourceVersion\": \"17762506\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5019/pods/e2e-test-nginx-pod\",\n \"uid\": \"9b7b8887-8121-4d66-8138-5c6f0f28cd15\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zw7xg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zw7xg\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-zw7xg\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-23T13:05:59Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-23T13:06:07Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-23T13:06:07Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-23T13:05:59Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://48a3691d66ac9d691d67970e045b88fd664f76e8746dd2b82874e6e9578f66bb\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-23T13:06:06Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-23T13:05:59Z\"\n }\n}\n" STEP: replace the image in the pod Dec 23 13:06:09.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5019' Dec 23 13:06:10.406: INFO: stderr: "" Dec 23 13:06:10.406: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Dec 23 13:06:10.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5019' Dec 23 13:06:19.624: INFO: stderr: "" Dec 23 13:06:19.624: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:06:19.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5019" for this suite. 
Dec 23 13:06:25.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:06:25.874: INFO: namespace kubectl-5019 deletion completed in 6.236525436s • [SLOW TEST:34.712 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:06:25.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-3c655bc2-2528-4e04-a429-1cdba3f9f364 STEP: Creating a pod to test consume secrets Dec 23 13:06:26.133: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304" in namespace "projected-1742" to be "success or failure" Dec 23 13:06:26.138: INFO: Pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113225ms Dec 23 13:06:28.147: INFO: Pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012984976s Dec 23 13:06:30.162: INFO: Pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028817176s Dec 23 13:06:32.173: INFO: Pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039129981s Dec 23 13:06:34.905: INFO: Pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.771546498s STEP: Saw pod success Dec 23 13:06:34.905: INFO: Pod "pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304" satisfied condition "success or failure" Dec 23 13:06:34.920: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304 container secret-volume-test: STEP: delete the pod Dec 23 13:06:34.995: INFO: Waiting for pod pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304 to disappear Dec 23 13:06:35.001: INFO: Pod pod-projected-secrets-36c95aa7-30d3-4b41-8102-33fedbf74304 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:06:35.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1742" for this suite. 
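The projected-secret spec above mounts the same secret through two separate projected volumes and reads it back from both paths. A minimal sketch of such a pod (all names here are hypothetical; the run's generated names carry UUID suffixes):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1            # same secret projected twice, at two mount points
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test

The "mappings and Item Mode" spec that ran earlier differs only in adding items (a key, a path, and an octal mode such as 0400) under the secret source, so a single key lands at a chosen path with a fixed file mode.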
Dec 23 13:06:41.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:06:41.228: INFO: namespace projected-1742 deletion completed in 6.148582283s • [SLOW TEST:15.352 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:06:41.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:06:49.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-453" for this suite. Dec 23 13:06:55.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:06:55.986: INFO: namespace emptydir-wrapper-453 deletion completed in 6.230355457s • [SLOW TEST:14.758 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:06:55.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 23 13:06:56.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65" in namespace "downward-api-7519" to be "success or failure" Dec 23 13:06:56.219: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.286157ms Dec 23 13:06:58.228: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030794965s Dec 23 13:07:00.283: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086126405s Dec 23 13:07:02.295: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097968326s Dec 23 13:07:04.303: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10643299s Dec 23 13:07:06.318: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121275241s STEP: Saw pod success Dec 23 13:07:06.318: INFO: Pod "downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65" satisfied condition "success or failure" Dec 23 13:07:06.323: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65 container client-container: STEP: delete the pod Dec 23 13:07:06.390: INFO: Waiting for pod downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65 to disappear Dec 23 13:07:06.401: INFO: Pod downwardapi-volume-19756380-c9ec-4d55-aeb4-2d1769a00d65 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:07:06.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7519" for this suite. Dec 23 13:07:12.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:07:12.618: INFO: namespace downward-api-7519 deletion completed in 6.205538115s • [SLOW TEST:16.632 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:07:12.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-4zhc STEP: Creating a pod to test atomic-volume-subpath Dec 23 13:07:12.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4zhc" in namespace "subpath-1397" to be "success or failure" Dec 23 13:07:12.841: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.50789ms Dec 23 13:07:14.865: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032459728s Dec 23 13:07:16.874: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041399661s Dec 23 13:07:18.882: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049445939s Dec 23 13:07:20.893: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 8.059997214s Dec 23 13:07:22.904: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 10.071514561s Dec 23 13:07:24.910: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 12.077707787s Dec 23 13:07:26.984: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 14.151118841s Dec 23 13:07:28.997: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 16.164070565s Dec 23 13:07:31.005: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 18.172092857s Dec 23 13:07:33.011: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 20.178673625s Dec 23 13:07:35.022: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 22.18898823s Dec 23 13:07:37.031: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 24.198598689s Dec 23 13:07:39.056: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 26.223533496s Dec 23 13:07:41.076: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Running", Reason="", readiness=true. Elapsed: 28.243513593s Dec 23 13:07:43.085: INFO: Pod "pod-subpath-test-configmap-4zhc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.252641987s STEP: Saw pod success Dec 23 13:07:43.085: INFO: Pod "pod-subpath-test-configmap-4zhc" satisfied condition "success or failure" Dec 23 13:07:43.089: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-4zhc container test-container-subpath-configmap-4zhc: STEP: delete the pod Dec 23 13:07:43.314: INFO: Waiting for pod pod-subpath-test-configmap-4zhc to disappear Dec 23 13:07:43.324: INFO: Pod pod-subpath-test-configmap-4zhc no longer exists STEP: Deleting pod pod-subpath-test-configmap-4zhc Dec 23 13:07:43.325: INFO: Deleting pod "pod-subpath-test-configmap-4zhc" in namespace "subpath-1397" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:07:43.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1397" for this suite. 
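The atomic-writer subpath spec above mounts one key of a configMap at a file path via subPath; the long stretch in Running before Succeeded is the container repeatedly re-reading the file to verify atomic updates. A simplified sketch of the shape being exercised (names hypothetical, and the polling loop omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap
data:
  configmap-file: mount-tester content
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/configmap-file"]
    volumeMounts:
    - name: config
      mountPath: /etc/podinfo/configmap-file
      subPath: configmap-file        # mounts just this key's file, not the whole volume
  volumes:
  - name: config
    configMap:
      name: subpath-configmap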
Dec 23 13:07:49.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:07:49.525: INFO: namespace subpath-1397 deletion completed in 6.189494231s • [SLOW TEST:36.905 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:07:49.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9383.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9383.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9383.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9383.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 208.167.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.167.208_udp@PTR;check="$$(dig +tcp +noall +answer +search 208.167.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.167.208_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9383.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9383.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9383.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9383.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9383.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 208.167.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.167.208_udp@PTR;check="$$(dig +tcp +noall +answer +search 208.167.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.167.208_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 23 13:08:05.976: INFO: Unable to read jessie_udp@dns-test-service.dns-9383.svc.cluster.local from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-9383.svc.cluster.local from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.022: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.051: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9383.svc.cluster.local from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.065: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9383.svc.cluster.local from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.073: INFO: Unable to read jessie_udp@PodARecord from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.085: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76: the server could not find the requested resource (get pods dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76) Dec 23 13:08:06.105: INFO: Lookups using dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76 failed for: [jessie_udp@dns-test-service.dns-9383.svc.cluster.local jessie_tcp@dns-test-service.dns-9383.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9383.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9383.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9383.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 23 13:08:11.317: INFO: DNS probes using dns-9383/dns-test-44d560d3-e9ad-41a9-9ab3-d90241972e76 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:08:11.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9383" for this suite. 
Dec 23 13:08:17.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:08:17.959: INFO: namespace dns-9383 deletion completed in 6.211057819s • [SLOW TEST:28.433 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:08:17.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-8d7c8f72-534a-4ed4-9e69-c198b8f1a33c STEP: Creating a pod to test consume configMaps Dec 23 13:08:18.057: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238" in namespace "projected-210" to be "success or failure" Dec 23 13:08:18.182: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238": Phase="Pending", Reason="", readiness=false. Elapsed: 125.153942ms Dec 23 13:08:20.192: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134582097s Dec 23 13:08:22.217: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159670039s Dec 23 13:08:24.227: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170074819s Dec 23 13:08:26.300: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242884645s Dec 23 13:08:28.311: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.253805727s STEP: Saw pod success Dec 23 13:08:28.311: INFO: Pod "pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238" satisfied condition "success or failure" Dec 23 13:08:28.317: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238 container projected-configmap-volume-test: STEP: delete the pod Dec 23 13:08:28.358: INFO: Waiting for pod pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238 to disappear Dec 23 13:08:28.363: INFO: Pod pod-projected-configmaps-8501d978-f055-4510-a837-f853ebc92238 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:08:28.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-210" for this suite. 
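Like the projected-secret variants earlier, the projected-configMap spec above wraps a configMap source in a projected volume and reads the key back from the mount. A minimal sketch (hypothetical names):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume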
Dec 23 13:08:36.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:08:36.616: INFO: namespace projected-210 deletion completed in 8.247044003s • [SLOW TEST:18.657 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:08:36.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-23065c3c-fab0-448a-b95d-464b840a9789 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:08:46.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-311" for this suite. 
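The spec above checks that a configMap's binaryData entries survive the volume round trip byte-for-byte alongside plain data. A sketch of the shape under test (names hypothetical; the bytes are arbitrary):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-test
data:
  text-data: some text
binaryData:
  binary-data: 3q2+7w==              # base64 for the bytes de ad be ef
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-binary-test
    image: busybox:1.29
    command: ["sh", "-c", "hexdump -C /etc/configmap-volume/binary-data && cat /etc/configmap-volume/text-data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-binary-test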
Dec 23 13:09:11.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:09:11.857: INFO: namespace configmap-311 deletion completed in 25.048300155s • [SLOW TEST:35.241 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:09:11.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-9nbt STEP: Creating a pod to test atomic-volume-subpath Dec 23 13:09:12.067: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9nbt" in namespace "subpath-557" to be "success or failure" Dec 23 13:09:12.129: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Pending", Reason="", readiness=false. Elapsed: 61.252481ms Dec 23 13:09:14.135: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067410443s Dec 23 13:09:16.151: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083874827s Dec 23 13:09:18.158: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090292775s Dec 23 13:09:20.169: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101768005s Dec 23 13:09:22.180: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112081921s Dec 23 13:09:24.187: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 12.119497795s Dec 23 13:09:26.204: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 14.136768176s Dec 23 13:09:28.212: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 16.144203359s Dec 23 13:09:30.220: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 18.152780969s Dec 23 13:09:32.228: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 20.160210522s Dec 23 13:09:34.244: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 22.176586372s Dec 23 13:09:36.258: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.190956639s Dec 23 13:09:38.270: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 26.202020111s Dec 23 13:09:40.375: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 28.30781573s Dec 23 13:09:42.388: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 30.320547813s Dec 23 13:09:44.401: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Running", Reason="", readiness=true. Elapsed: 32.33319951s Dec 23 13:09:46.411: INFO: Pod "pod-subpath-test-downwardapi-9nbt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.343173108s STEP: Saw pod success Dec 23 13:09:46.411: INFO: Pod "pod-subpath-test-downwardapi-9nbt" satisfied condition "success or failure" Dec 23 13:09:46.415: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-9nbt container test-container-subpath-downwardapi-9nbt: STEP: delete the pod Dec 23 13:09:46.702: INFO: Waiting for pod pod-subpath-test-downwardapi-9nbt to disappear Dec 23 13:09:46.720: INFO: Pod pod-subpath-test-downwardapi-9nbt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9nbt Dec 23 13:09:46.720: INFO: Deleting pod "pod-subpath-test-downwardapi-9nbt" in namespace "subpath-557" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:09:46.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-557" for this suite. Dec 23 13:09:52.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:09:52.960: INFO: namespace subpath-557 deletion completed in 6.196626707s • [SLOW TEST:41.102 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:09:52.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
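For the downward-API flavor of the subpath spec that completed above, only the volume source changes relative to the configMap variant; a minimal sketch (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo/podname
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the mounted file contains the pod's own name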
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:10:23.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7530" for this suite. Dec 23 13:10:31.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:10:31.525: INFO: namespace namespaces-7530 deletion completed in 8.159232103s STEP: Destroying namespace "nsdeletetest-1473" for this suite. Dec 23 13:10:31.528: INFO: Namespace nsdeletetest-1473 was already deleted STEP: Destroying namespace "nsdeletetest-432" for this suite. Dec 23 13:10:37.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:10:37.828: INFO: namespace nsdeletetest-432 deletion completed in 6.300255354s • [SLOW TEST:44.867 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:10:37.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Dec 23 13:10:37.932: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
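The registration step above deploys the sample API server (the sample-apiserver-deployment whose rollout is polled below) and then publishes it to the aggregation layer with an APIService object, roughly like this sketch. The group and version come from the 1.10 sample-apiserver (wardle.k8s.io); the Service name is an assumption, and the real test pins a CA bundle rather than skipping TLS verification:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  service:
    name: sample-api                 # assumed name of the Service fronting the deployment
    namespace: aggregator-8193
  insecureSkipTLSVerify: true        # sketch only; production registrations set caBundle
  groupPriorityMinimum: 2000
  versionPriority: 200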
Dec 23 13:10:38.791: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Dec 23 13:10:41.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 13:10:43.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 13:10:45.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 13:10:47.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 13:10:49.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712703438, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 13:10:55.159: INFO: Waited 4.003720941s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:10:56.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8193" for this suite. 
Dec 23 13:11:02.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:11:02.643: INFO: namespace aggregator-8193 deletion completed in 6.194395029s • [SLOW TEST:24.814 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:11:02.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Dec 23 13:11:02.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3644' Dec 23 13:11:03.322: INFO: stderr: "" Dec 23 13:11:03.323: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Dec 23 13:11:04.344: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:04.345: INFO: Found 0 / 1 Dec 23 13:11:05.337: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:05.338: INFO: Found 0 / 1 Dec 23 13:11:06.356: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:06.356: INFO: Found 0 / 1 Dec 23 13:11:07.334: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:07.334: INFO: Found 0 / 1 Dec 23 13:11:08.336: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:08.337: INFO: Found 0 / 1 Dec 23 13:11:09.335: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:09.336: INFO: Found 0 / 1 Dec 23 13:11:10.333: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:10.333: INFO: Found 0 / 1 Dec 23 13:11:11.334: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:11.335: INFO: Found 1 / 1 Dec 23 13:11:11.335: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 23 13:11:11.340: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:11:11.340: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Dec 23 13:11:11.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mmq5l redis-master --namespace=kubectl-3644' Dec 23 13:11:11.540: INFO: stderr: "" Dec 23 13:11:11.540: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Dec 13:11:09.344 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Dec 13:11:09.344 # Server started, Redis version 3.2.12\n1:M 23 Dec 13:11:09.345 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Dec 13:11:09.345 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 23 13:11:11.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mmq5l redis-master --namespace=kubectl-3644 --tail=1' Dec 23 13:11:11.739: INFO: stderr: "" Dec 23 13:11:11.739: INFO: stdout: "1:M 23 Dec 13:11:09.345 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 23 13:11:11.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mmq5l redis-master --namespace=kubectl-3644 --limit-bytes=1' Dec 23 13:11:11.947: INFO: stderr: "" Dec 23 13:11:11.948: INFO: stdout: " " STEP: exposing timestamps Dec 23 13:11:11.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mmq5l redis-master --namespace=kubectl-3644 --tail=1 --timestamps' Dec 23 13:11:12.122: INFO: stderr: "" Dec 23 13:11:12.123: INFO: stdout: "2019-12-23T13:11:09.345447841Z 1:M 23 Dec 13:11:09.345 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 23 13:11:14.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mmq5l redis-master --namespace=kubectl-3644 --since=1s' Dec 23 13:11:14.850: INFO: stderr: "" Dec 23 13:11:14.851: INFO: stdout: "" Dec 23 13:11:14.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mmq5l redis-master --namespace=kubectl-3644 --since=24h' Dec 23 13:11:15.030: INFO: stderr: "" Dec 23 13:11:15.030: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Dec 13:11:09.344 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Dec 13:11:09.344 # Server started, Redis version 3.2.12\n1:M 23 Dec 13:11:09.345 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Dec 13:11:09.345 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Dec 23 13:11:15.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3644' Dec 23 13:11:15.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:11:15.152: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 23 13:11:15.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3644' Dec 23 13:11:15.299: INFO: stderr: "No resources found.\n" Dec 23 13:11:15.299: INFO: stdout: "" Dec 23 13:11:15.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3644 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 23 13:11:15.471: INFO: stderr: "" Dec 23 13:11:15.471: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:11:15.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3644" for this suite. 
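The manifest piped into `kubectl create -f -` above is never echoed to the log. Given the app=redis selector the waiter matches and the redis-master container name passed to `kubectl logs`, it is roughly this sketch (the image is an assumption; the startup banner reports Redis 3.2.12):

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis                       # matches "Selector matched 1 pods for map[app:redis]"
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master           # container name used in `kubectl logs redis-master-mmq5l redis-master`
        image: redis:3.2.12          # assumed; chosen to match the version in the banner
        ports:
        - containerPort: 6379

The filtering flags exercised afterwards (--tail, --limit-bytes, --timestamps, --since) are all standard `kubectl logs` options, which is exactly what the "retrieve and filter logs" assertion covers.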
Dec 23 13:11:21.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:11:21.666: INFO: namespace kubectl-3644 deletion completed in 6.15574596s • [SLOW TEST:19.023 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:11:21.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8708 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-8708 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8708 Dec 23 13:11:21.844: INFO: Found 0 stateful pods, waiting for 1 Dec 23 13:11:31.863: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 23 13:11:31.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:11:32.436: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:11:32.437: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:11:32.437: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:11:32.489: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 23 13:11:42.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:11:42.506: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:11:42.562: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:11:42.562: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:32 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:11:42.563: INFO: Dec 23 13:11:42.563: INFO: StatefulSet ss has not reached scale 3, at 1 Dec 23 13:11:43.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.970917481s Dec 23 13:11:44.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956270993s Dec 23 13:11:45.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.935644564s Dec 23 13:11:46.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.921990056s Dec 23 13:11:48.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.902805518s Dec 23 13:11:50.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.614017241s Dec 23 13:11:51.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.359371ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8708 Dec 23 13:11:52.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:11:53.146: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 23 13:11:53.146: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:11:53.146: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:11:53.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:11:53.676: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 23 13:11:53.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:11:53.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:11:53.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:11:54.567: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 23 13:11:54.567: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:11:54.567: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:11:54.585: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:11:54.585: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:11:54.585: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Dec 23 13:11:54.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:11:55.311: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:11:55.311: INFO: stdout: "'/usr/share/nginx/html/index.html' -> 
'/tmp/index.html'\n" Dec 23 13:11:55.311: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:11:55.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:11:55.725: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:11:55.725: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:11:55.725: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:11:55.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8708 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:11:56.144: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:11:56.144: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:11:56.144: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:11:56.144: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:11:56.158: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 23 13:12:06.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:12:06.174: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:12:06.174: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:12:06.195: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:06.195: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:06.196: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:06.196: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:06.196: INFO: Dec 23 13:12:06.196: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 13:12:07.996: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:07.997: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:07.997: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:07.997: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:07.998: INFO: Dec 23 13:12:07.998: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 13:12:09.043: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:09.043: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:09.044: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:09.044: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:09.044: INFO: Dec 23 13:12:09.044: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 13:12:10.425: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:10.425: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:10.426: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:10.426: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:10.426: INFO: Dec 23 13:12:10.426: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 13:12:11.443: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:11.443: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:11.443: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:11.443: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:11.443: INFO: Dec 23 13:12:11.443: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 13:12:12.464: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:12.464: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:12.464: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:12.464: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:42 +0000 UTC }] Dec 23 13:12:12.464: INFO: Dec 23 13:12:12.464: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 23 13:12:13.478: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:13.478: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:13.478: INFO: Dec 23 13:12:13.478: INFO: StatefulSet ss has not reached scale 0, at 1 Dec 23 13:12:14.525: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:14.525: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:14.526: INFO: Dec 23 13:12:14.526: INFO: StatefulSet ss has not reached scale 0, at 1 Dec 23 13:12:15.536: INFO: POD NODE PHASE GRACE CONDITIONS Dec 23 13:12:15.536: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 13:11:21 +0000 UTC }] Dec 23 13:12:15.536: INFO: Dec 23 13:12:15.536: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8708 Dec 23 13:12:16.632: INFO: Scaling statefulset ss to 0 Dec 23 13:12:16.662: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 23 13:12:16.666: INFO: Deleting all statefulset in ns statefulset-8708 Dec 23 13:12:16.670: INFO: Scaling statefulset ss to 0 Dec 23
13:12:16.681: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:12:16.683: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:12:16.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8708" for this suite. Dec 23 13:12:22.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:12:22.893: INFO: namespace statefulset-8708 deletion completed in 6.176667708s • [SLOW TEST:61.226 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:12:22.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 23 13:12:23.013: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763598,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 23 13:12:23.014: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763598,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying 
configmap A and ensuring the correct watchers observe the notification Dec 23 13:12:33.029: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763612,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 23 13:12:33.030: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763612,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 23 13:12:43.041: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763627,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 23 13:12:43.042: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763627,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 23 13:12:53.058: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763641,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 23 13:12:53.059: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-a,UID:f65f24a9-97fd-43c3-9ec2-80f77d630df5,ResourceVersion:17763641,Generation:0,CreationTimestamp:2019-12-23 13:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 23 13:13:03.074: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-b,UID:81c2c93f-09c8-4b05-a708-8e64633447aa,ResourceVersion:17763655,Generation:0,CreationTimestamp:2019-12-23 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 23 13:13:03.075: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-b,UID:81c2c93f-09c8-4b05-a708-8e64633447aa,ResourceVersion:17763655,Generation:0,CreationTimestamp:2019-12-23 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 23 13:13:13.084: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-b,UID:81c2c93f-09c8-4b05-a708-8e64633447aa,ResourceVersion:17763669,Generation:0,CreationTimestamp:2019-12-23 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 23 13:13:13.084: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2790,SelfLink:/api/v1/namespaces/watch-2790/configmaps/e2e-watch-test-configmap-b,UID:81c2c93f-09c8-4b05-a708-8e64633447aa,ResourceVersion:17763669,Generation:0,CreationTimestamp:2019-12-23 13:13:03 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:13:23.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2790" for this suite. Dec 23 13:13:29.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:13:29.261: INFO: namespace watch-2790 deletion completed in 6.169142778s • [SLOW TEST:66.368 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:13:29.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:14:19.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9005" for this suite. 
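Judging by their suffixes, the three containers exercised above (terminate-cmd-rpa, terminate-cmd-rpof, terminate-cmd-rpn) plausibly cover the restartPolicy values Always, OnFailure and Never; the same status fields can be probed by hand. A sketch under that assumption, with a hypothetical pod name:

# under restartPolicy Never, a container that exits non-zero leaves the pod in phase Failed
kubectl run terminate-cmd-test --restart=Never --image=busybox --command -- sh -c 'exit 1'
sleep 10
kubectl get pod terminate-cmd-test -o jsonpath='{.status.phase} restarts={.status.containerStatuses[0].restartCount}'
kubectl delete pod terminate-cmd-test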
Dec 23 13:14:25.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:14:26.069: INFO: namespace container-runtime-9005 deletion completed in 6.194895627s • [SLOW TEST:56.806 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:14:26.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:14:35.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9056" for this suite. 
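The adoption flow above can be reproduced by hand: create a bare pod carrying the 'name' label, create a replication controller whose selector matches it, then watch the pod gain an ownerReference instead of a second replica being started. A sketch with hypothetical resource names (the label name=pod-adoption follows the STEP text):

kubectl run pod-adoption --restart=Never --labels=name=pod-adoption --image=docker.io/library/nginx:1.14-alpine
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption-rc
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# the orphan pod should now list the RC as its controller
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'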
Dec 23 13:14:59.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:14:59.385: INFO: namespace replication-controller-9056 deletion completed in 24.146308201s • [SLOW TEST:33.315 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:14:59.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 23 13:14:59.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-839' Dec 23 13:14:59.764: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 23 13:14:59.764: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Dec 23 13:14:59.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-839' Dec 23 13:15:00.126: INFO: stderr: "" Dec 23 13:15:00.126: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:15:00.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-839" for this suite. 
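The deprecation warning captured above points at kubectl create; assuming the client is new enough to have kubectl create job, an equivalent of the deprecated generator is:

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-839
kubectl get jobs e2e-test-nginx-job --namespace=kubectl-839
kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-839

(kubectl create job fills in restartPolicy Never rather than the OnFailure requested above, so the two jobs are close but not identical.)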
Dec 23 13:15:06.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:15:06.286: INFO: namespace kubectl-839 deletion completed in 6.14169256s • [SLOW TEST:6.901 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:15:06.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:15:16.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2588" for this suite. 
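The behavior under test is driven by the container securityContext; a minimal sketch of a pod that should refuse writes to its root filesystem (pod name, image and command are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# any write to the root filesystem should fail with a read-only error
kubectl exec busybox-readonly-fs -- /bin/sh -c 'echo data > /file'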
Dec 23 13:16:08.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:16:08.834: INFO: namespace kubelet-test-2588 deletion completed in 52.171626983s • [SLOW TEST:62.545 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:16:08.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 23 13:16:08.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8962' Dec 23 13:16:11.942: INFO: stderr: "" Dec 23 13:16:11.942: INFO: stdout: "replicationcontroller/redis-master created\n" Dec 23 13:16:11.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8962' Dec 23 13:16:12.827: INFO: stderr: "" Dec 23 13:16:12.827: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Dec 23 13:16:13.838: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:13.838: INFO: Found 0 / 1 Dec 23 13:16:14.839: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:14.839: INFO: Found 0 / 1 Dec 23 13:16:15.870: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:15.871: INFO: Found 0 / 1 Dec 23 13:16:16.863: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:16.863: INFO: Found 0 / 1 Dec 23 13:16:17.838: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:17.838: INFO: Found 0 / 1 Dec 23 13:16:18.837: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:18.837: INFO: Found 0 / 1 Dec 23 13:16:19.842: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:19.842: INFO: Found 1 / 1 Dec 23 13:16:19.842: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 23 13:16:19.849: INFO: Selector matched 1 pods for map[app:redis] Dec 23 13:16:19.849: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
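The "Selector matched / Found n / m" loop above is an ordinary label-selector poll; the same readiness check can be run directly (the app=redis selector and the namespace are from this run):

kubectl get pods -l app=redis --namespace=kubectl-8962 -o jsonpath='{range .items[*]}{.metadata.name} {.status.phase}{"\n"}{end}'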
Dec 23 13:16:19.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-q52qz --namespace=kubectl-8962' Dec 23 13:16:20.071: INFO: stderr: "" Dec 23 13:16:20.071: INFO: stdout: "Name: redis-master-q52qz\nNamespace: kubectl-8962\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 23 Dec 2019 13:16:12 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://d852d2e64ef06e376677cfb322654d976cfc580c9dc7cc466ac9cf7005d4c297\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 23 Dec 2019 13:16:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pj5r5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pj5r5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pj5r5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-8962/redis-master-q52qz to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n" Dec 23 13:16:20.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8962' Dec 23 13:16:20.259: INFO: stderr: "" Dec 23 13:16:20.259: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8962\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-q52qz\n" Dec 23 13:16:20.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8962' Dec 23 13:16:20.403: INFO: stderr: "" Dec 23 13:16:20.403: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8962\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.40.248\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Dec 23 13:16:20.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Dec 23 13:16:20.559: INFO: stderr: "" Dec 23 13:16:20.560: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 23 Dec 2019 13:15:41 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 23 Dec 2019 13:15:41 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 23 Dec 2019 13:15:41 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 23 Dec 2019 13:15:41 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 72d\n kubectl-8962 redis-master-q52qz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Dec 23 13:16:20.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8962' Dec 23 13:16:20.742: INFO: stderr: "" Dec 23 13:16:20.742: INFO: stdout: "Name: kubectl-8962\nLabels: e2e-framework=kubectl\n e2e-run=9734745c-763b-459f-b9da-f6dde306efad\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:16:20.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8962" for this suite. 
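Stripped of the harness wrapper, the five describe calls this test asserts on are plain kubectl invocations, runnable as-is against the same cluster:

kubectl describe pod redis-master-q52qz --namespace=kubectl-8962
kubectl describe rc redis-master --namespace=kubectl-8962
kubectl describe service redis-master --namespace=kubectl-8962
kubectl describe node iruya-node
kubectl describe namespace kubectl-8962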
Dec 23 13:16:36.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:16:36.938: INFO: namespace kubectl-8962 deletion completed in 16.184278936s • [SLOW TEST:28.104 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:16:36.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Dec 23 13:16:43.540: INFO: 0 pods remaining Dec 23 13:16:43.540: INFO: 0 pods has nil DeletionTimestamp Dec 23 13:16:43.540: INFO: STEP: Gathering metrics W1223 13:16:44.524353 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 23 13:16:44.524: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:16:44.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8352" for this suite. 
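The deleteOptions in the spec name is a propagationPolicy on the DELETE call: with Foreground, the RC is kept (carrying a deletionTimestamp and the foregroundDeletion finalizer) until its pods are gone, which is what the test waits for. A sketch against the raw API, assuming kubectl proxy on 127.0.0.1:8001 and a hypothetical RC name, since this run does not log one:

kubectl proxy --port=8001 &
sleep 1
curl -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/gc-8352/replicationcontrollers/example-rc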
Dec 23 13:16:54.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:16:54.836: INFO: namespace gc-8352 deletion completed in 10.304501186s • [SLOW TEST:17.897 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:16:54.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6254 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 23 13:16:54.990: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 23 13:17:33.813: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6254 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 23 13:17:33.813: INFO: >>> kubeConfig: /root/.kube/config Dec 23 13:17:35.357: INFO: Found all expected endpoints: [netserver-0] Dec 23 13:17:35.368: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6254 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 23 13:17:35.368: INFO: >>> kubeConfig: /root/.kube/config Dec 23 13:17:36.730: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:17:36.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6254" for this suite. 
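Each endpoint check above shells into the host-test pod and sends the literal string hostName over UDP to a netserver pod IP on port 8081; the ExecWithOptions entries from the log correspond to:

kubectl exec host-test-container-pod --namespace=pod-network-test-6254 -c hostexec -- /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'"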
Dec 23 13:18:00.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:18:00.861: INFO: namespace pod-network-test-6254 deletion completed in 24.120432607s • [SLOW TEST:66.024 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:18:00.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Dec 23 13:18:09.490: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2086 pod-service-account-b7e8f734-18bd-4c62-8cf9-78a7deb7d823 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 23 13:18:10.087: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2086 pod-service-account-b7e8f734-18bd-4c62-8cf9-78a7deb7d823 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 23 13:18:10.827: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2086 pod-service-account-b7e8f734-18bd-4c62-8cf9-78a7deb7d823 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:18:11.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2086" for this suite. 
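The three reads above use fixed, well-known paths: every container that mounts a service account gets token, ca.crt and namespace under /var/run/secrets/kubernetes.io/serviceaccount. To inspect them in a pod of your own ($POD and the container name "test" are placeholders):

for f in token ca.crt namespace; do
  kubectl exec "$POD" -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/$f
done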
Dec 23 13:18:19.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:18:19.438: INFO: namespace svcaccounts-2086 deletion completed in 8.12816084s • [SLOW TEST:18.577 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:18:19.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-8f0ea8fe-98fc-4763-b4fe-2f0381bd9fc4 STEP: Creating a pod to test consume configMaps Dec 23 13:18:19.629: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22" in namespace "projected-6342" to be "success or failure" Dec 23 13:18:19.682: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22": Phase="Pending", Reason="", readiness=false. Elapsed: 53.362006ms Dec 23 13:18:21.689: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059843339s Dec 23 13:18:23.707: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078366521s Dec 23 13:18:25.716: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087217782s Dec 23 13:18:27.730: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100833946s Dec 23 13:18:29.740: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.1112604s STEP: Saw pod success Dec 23 13:18:29.740: INFO: Pod "pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22" satisfied condition "success or failure" Dec 23 13:18:29.744: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22 container projected-configmap-volume-test: STEP: delete the pod Dec 23 13:18:29.834: INFO: Waiting for pod pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22 to disappear Dec 23 13:18:29.876: INFO: Pod pod-projected-configmaps-998475db-0620-40ac-8547-1da519f3de22 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:18:29.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6342" for this suite. Dec 23 13:18:35.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:18:36.058: INFO: namespace projected-6342 deletion completed in 6.173735474s • [SLOW TEST:16.619 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:18:36.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 23 13:18:44.843: INFO: Successfully updated pod "annotationupdateb4ac4a22-9d7e-48e9-addd-ad36d1eac55e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:18:46.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2860" for this suite. 
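The annotation-update check works because downward API volume files are rewritten by the kubelet when pod metadata changes, unlike downward API environment variables, which are fixed at container start. A sketch against a pod that projects metadata.annotations into a volume (the /etc/podinfo path and $POD are assumptions for illustration):

kubectl annotate pod "$POD" --overwrite build=two
# Give the kubelet a moment, then re-read the projected file:
kubectl exec "$POD" -- cat /etc/podinfo/annotations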
Dec 23 13:19:08.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:19:09.068: INFO: namespace projected-2860 deletion completed in 22.132795061s • [SLOW TEST:33.010 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:19:09.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456 Dec 23 13:19:09.203: INFO: Pod name my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456: Found 0 pods out of 1 Dec 23 13:19:14.213: INFO: Pod name my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456: Found 1 pods out of 1 Dec 23 13:19:14.213: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456" are running Dec 23 13:19:18.227: INFO: Pod "my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456-ss6fj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:19:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:19:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:19:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:19:09 +0000 UTC Reason: Message:}]) Dec 23 13:19:18.227: INFO: Trying to dial the pod Dec 23 13:19:23.289: INFO: Controller my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456: Got expected result from replica 1 [my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456-ss6fj]: "my-hostname-basic-2d42cf8a-d1bf-4e4e-91c7-b7c63749f456-ss6fj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:19:23.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-347" for this suite. 
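The setup behind this spec is reproducible with a short manifest: each replica of the serve-hostname image answers HTTP on port 9376 with its own pod name, which is how the suite can verify every replica individually. The rc name and image tag below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l name=my-hostname-basic -o wide
# From inside the cluster: wget -qO- http://<pod-ip>:9376 returns that pod's name.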
Dec 23 13:19:29.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:19:29.515: INFO: namespace replication-controller-347 deletion completed in 6.218130984s • [SLOW TEST:20.447 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:19:29.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 23 13:19:29.652: INFO: Number of nodes with available pods: 0 Dec 23 13:19:29.652: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:30.763: INFO: Number of nodes with available pods: 0 Dec 23 13:19:30.763: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:31.700: INFO: Number of nodes with available pods: 0 Dec 23 13:19:31.700: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:32.765: INFO: Number of nodes with available pods: 0 Dec 23 13:19:32.766: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:33.670: INFO: Number of nodes with available pods: 0 Dec 23 13:19:33.671: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:35.028: INFO: Number of nodes with available pods: 0 Dec 23 13:19:35.028: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:35.676: INFO: Number of nodes with available pods: 0 Dec 23 13:19:35.676: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:36.773: INFO: Number of nodes with available pods: 0 Dec 23 13:19:36.773: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:37.683: INFO: Number of nodes with available pods: 0 Dec 23 13:19:37.683: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:38.690: INFO: Number of nodes with available pods: 0 Dec 23 13:19:38.690: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:39.672: INFO: Number of nodes with available pods: 1 Dec 23 13:19:39.672: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:19:40.675: INFO: Number of nodes with available pods: 2 Dec 23 13:19:40.675: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
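The step just announced amounts to deleting one daemon pod and waiting: the DaemonSet controller observes the missing pod and schedules a replacement on the same node, which is what the availability polling below captures. By hand ($POD and $NS are placeholders):

kubectl delete pod "$POD" -n "$NS"
kubectl get pods -n "$NS" -o wide -w   # a replacement daemon pod reappears on the vacated node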
Dec 23 13:19:40.706: INFO: Number of nodes with available pods: 1 Dec 23 13:19:40.706: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:41.727: INFO: Number of nodes with available pods: 1 Dec 23 13:19:41.727: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:42.759: INFO: Number of nodes with available pods: 1 Dec 23 13:19:42.759: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:43.729: INFO: Number of nodes with available pods: 1 Dec 23 13:19:43.729: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:44.731: INFO: Number of nodes with available pods: 1 Dec 23 13:19:44.731: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:45.722: INFO: Number of nodes with available pods: 1 Dec 23 13:19:45.722: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:46.790: INFO: Number of nodes with available pods: 1 Dec 23 13:19:46.790: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:47.731: INFO: Number of nodes with available pods: 1 Dec 23 13:19:47.731: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:48.923: INFO: Number of nodes with available pods: 1 Dec 23 13:19:48.923: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:49.900: INFO: Number of nodes with available pods: 1 Dec 23 13:19:49.901: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:50.843: INFO: Number of nodes with available pods: 1 Dec 23 13:19:50.844: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:51.730: INFO: Number of nodes with available pods: 1 Dec 23 13:19:51.730: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:52.802: INFO: Number of nodes with available pods: 1 Dec 23 13:19:52.802: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:53.839: INFO: Number of nodes with available pods: 1 Dec 23 13:19:53.840: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 23 13:19:54.726: INFO: Number of nodes with available pods: 2 Dec 23 13:19:54.726: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5334, will wait for the garbage collector to delete the pods Dec 23 13:19:54.852: INFO: Deleting DaemonSet.extensions daemon-set took: 69.142845ms Dec 23 13:19:55.153: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.945888ms Dec 23 13:20:06.563: INFO: Number of nodes with available pods: 0 Dec 23 13:20:06.564: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 13:20:06.576: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5334/daemonsets","resourceVersion":"17764742"},"items":null} Dec 23 13:20:06.586: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5334/pods","resourceVersion":"17764742"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 
23 13:20:06.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5334" for this suite. Dec 23 13:20:12.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:20:12.793: INFO: namespace daemonsets-5334 deletion completed in 6.160738687s • [SLOW TEST:43.277 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:20:12.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 23 13:20:12.941: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Dec 23 13:20:12.973: INFO: Number of nodes with available pods: 0 Dec 23 13:20:12.973: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:14.011: INFO: Number of nodes with available pods: 0 Dec 23 13:20:14.011: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:16.015: INFO: Number of nodes with available pods: 0 Dec 23 13:20:16.015: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:17.010: INFO: Number of nodes with available pods: 0 Dec 23 13:20:17.010: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:18.006: INFO: Number of nodes with available pods: 0 Dec 23 13:20:18.006: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:20.110: INFO: Number of nodes with available pods: 0 Dec 23 13:20:20.110: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:21.275: INFO: Number of nodes with available pods: 0 Dec 23 13:20:21.275: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:22.045: INFO: Number of nodes with available pods: 0 Dec 23 13:20:22.045: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:22.991: INFO: Number of nodes with available pods: 2 Dec 23 13:20:22.991: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Dec 23 13:20:23.176: INFO: Wrong image for pod: daemon-set-8ffm9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:23.176: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:24.238: INFO: Wrong image for pod: daemon-set-8ffm9. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:24.238: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:25.246: INFO: Wrong image for pod: daemon-set-8ffm9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:25.246: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:26.240: INFO: Wrong image for pod: daemon-set-8ffm9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:26.240: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:27.239: INFO: Wrong image for pod: daemon-set-8ffm9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:27.239: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:28.240: INFO: Wrong image for pod: daemon-set-8ffm9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:28.240: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:29.238: INFO: Wrong image for pod: daemon-set-8ffm9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:29.238: INFO: Pod daemon-set-8ffm9 is not available Dec 23 13:20:29.238: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:30.257: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:30.257: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:31.346: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:31.346: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:32.243: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:32.243: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:33.236: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:33.236: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:34.334: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:34.334: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:35.448: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:35.448: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:36.238: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Dec 23 13:20:36.238: INFO: Pod daemon-set-sx2hh is not available Dec 23 13:20:37.243: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:38.235: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:39.240: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:40.243: INFO: Wrong image for pod: daemon-set-g8lvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 23 13:20:40.243: INFO: Pod daemon-set-g8lvv is not available Dec 23 13:20:41.236: INFO: Pod daemon-set-gcmkt is not available STEP: Check that daemon pods are still running on every node of the cluster. Dec 23 13:20:41.251: INFO: Number of nodes with available pods: 1 Dec 23 13:20:41.251: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:42.266: INFO: Number of nodes with available pods: 1 Dec 23 13:20:42.267: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:43.275: INFO: Number of nodes with available pods: 1 Dec 23 13:20:43.275: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:44.278: INFO: Number of nodes with available pods: 1 Dec 23 13:20:44.278: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:45.267: INFO: Number of nodes with available pods: 1 Dec 23 13:20:45.267: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:46.271: INFO: Number of nodes with available pods: 1 Dec 23 13:20:46.271: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:47.269: INFO: Number of nodes with available pods: 1 Dec 23 13:20:47.269: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:20:48.269: INFO: Number of nodes with available pods: 2 Dec 23 13:20:48.269: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4410, will wait for the garbage collector to delete the pods Dec 23 13:20:48.358: INFO: Deleting DaemonSet.extensions daemon-set took: 10.28999ms Dec 23 13:20:48.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.727659ms Dec 23 13:20:55.070: INFO: Number of nodes with available pods: 0 Dec 23 13:20:55.070: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 13:20:55.076: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4410/daemonsets","resourceVersion":"17764908"},"items":null} Dec 23 13:20:55.080: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4410/pods","resourceVersion":"17764908"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:20:55.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4410" for this suite. 
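The image bump that drove this rollout can be issued directly; with .spec.updateStrategy.type set to RollingUpdate (the apps/v1 default), the controller replaces daemon pods node by node, exactly as the "Wrong image for pod" polling shows. The container name "app" below is an assumption; check kubectl get ds daemon-set -o yaml for the real one:

kubectl set image ds/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status ds/daemon-set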
Dec 23 13:21:01.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:21:01.276: INFO: namespace daemonsets-4410 deletion completed in 6.177602293s • [SLOW TEST:48.484 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:21:01.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 23 13:21:01.374: INFO: Waiting up to 5m0s for pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516" in namespace "downward-api-9713" to be "success or failure" Dec 23 13:21:01.458: INFO: Pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516": Phase="Pending", Reason="", readiness=false. Elapsed: 84.132815ms Dec 23 13:21:03.471: INFO: Pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096862643s Dec 23 13:21:05.490: INFO: Pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116259139s Dec 23 13:21:07.507: INFO: Pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132600744s Dec 23 13:21:09.517: INFO: Pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14282114s STEP: Saw pod success Dec 23 13:21:09.517: INFO: Pod "downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516" satisfied condition "success or failure" Dec 23 13:21:09.521: INFO: Trying to get logs from node iruya-node pod downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516 container dapi-container: STEP: delete the pod Dec 23 13:21:09.628: INFO: Waiting for pod downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516 to disappear Dec 23 13:21:09.641: INFO: Pod downward-api-90d7d4e7-d8ba-4ef0-9f43-b28441691516 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:21:09.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9713" for this suite. 
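The downward API env-var test boils down to fieldRef entries in the pod spec; the three values checked above map to metadata.name, metadata.namespace and status.podIP. A self-contained sketch (pod and variable names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "env | grep MY_POD_"]
    env:
    - name: MY_POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: MY_POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: MY_POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF
kubectl logs dapi-envars   # prints the three values once the pod has run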
Dec 23 13:21:15.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:21:15.920: INFO: namespace downward-api-9713 deletion completed in 6.267722909s • [SLOW TEST:14.642 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:21:15.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-caaab197-954d-48d7-8e3d-93cf6b9b08c0 STEP: Creating a pod to test consume secrets Dec 23 13:21:16.222: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799" in namespace "projected-381" to be "success or failure" Dec 23 13:21:16.259: INFO: Pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799": Phase="Pending", Reason="", readiness=false. Elapsed: 36.135518ms Dec 23 13:21:18.270: INFO: Pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047813301s Dec 23 13:21:20.284: INFO: Pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061985937s Dec 23 13:21:23.113: INFO: Pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799": Phase="Pending", Reason="", readiness=false. Elapsed: 6.890169081s Dec 23 13:21:25.123: INFO: Pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.900660202s STEP: Saw pod success Dec 23 13:21:25.123: INFO: Pod "pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799" satisfied condition "success or failure" Dec 23 13:21:25.129: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799 container projected-secret-volume-test: STEP: delete the pod Dec 23 13:21:25.233: INFO: Waiting for pod pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799 to disappear Dec 23 13:21:25.252: INFO: Pod pod-projected-secrets-d7490769-e45b-4746-9cc3-c6d3f1387799 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:21:25.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-381" for this suite. 
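defaultMode on a projected volume clamps the permission bits of every projected file, which is what the mode check above asserts. A sketch (secret name, mount path and the 0400 mode are illustrative; ls -L follows the ..data symlinks the kubelet creates):

kubectl create secret generic mysecret --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "ls -lL /etc/projected"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: mysecret
EOF
kubectl logs secret-mode-test   # expect -r-------- on the projected file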
Dec 23 13:21:31.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:21:31.506: INFO: namespace projected-381 deletion completed in 6.228134924s • [SLOW TEST:15.586 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:21:31.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 23 13:21:40.238: INFO: Successfully updated pod "labelsupdate9c692773-ae88-4324-a357-f8fc6e901ec9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:21:42.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3928" for this suite. 
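This is the labels analogue of the annotation case earlier: relabel the running pod, then re-read the file the downward API volume projects from metadata.labels (path and $POD again illustrative):

kubectl label pod "$POD" --overwrite team=blue
kubectl exec "$POD" -- cat /etc/podinfo/labels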
Dec 23 13:22:04.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:22:04.511: INFO: namespace downward-api-3928 deletion completed in 22.19466418s • [SLOW TEST:33.004 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:22:04.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6gs9h in namespace proxy-8972 I1223 13:22:04.672120 8 runners.go:180] Created replication controller with name: proxy-service-6gs9h, namespace: proxy-8972, replica count: 1 I1223 13:22:05.724223 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 13:22:06.724704 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 13:22:07.725498 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 13:22:08.726129 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 13:22:09.726656 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 13:22:10.727182 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 13:22:11.727696 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 13:22:12.728123 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 13:22:13.728618 8 runners.go:180] proxy-service-6gs9h Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 23 13:22:13.743: INFO: setup took 9.149442724s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 88.979126ms) Dec 23 13:22:13.834: INFO: (0) 
/api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 90.070202ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 89.712222ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 88.855686ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 89.528454ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 90.330499ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 89.699648ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 90.74399ms) Dec 23 13:22:13.834: INFO: (0) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 90.767761ms) Dec 23 13:22:13.835: INFO: (0) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 89.932666ms) Dec 23 13:22:13.835: INFO: (0) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 90.224417ms) Dec 23 13:22:13.862: INFO: (0) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 118.519295ms) Dec 23 13:22:13.863: INFO: (0) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 118.715659ms) Dec 23 13:22:13.862: INFO: (0) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 118.394051ms) Dec 23 13:22:13.863: INFO: (0) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: ... (200; 34.597724ms) Dec 23 13:22:13.900: INFO: (1) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 34.115146ms) Dec 23 13:22:13.900: INFO: (1) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 35.580045ms) Dec 23 13:22:13.904: INFO: (1) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 39.493996ms) Dec 23 13:22:13.904: INFO: (1) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 39.03696ms) Dec 23 13:22:13.904: INFO: (1) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 39.185301ms) Dec 23 13:22:13.905: INFO: (1) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 39.124899ms) Dec 23 13:22:13.905: INFO: (1) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 39.254707ms) Dec 23 13:22:13.905: INFO: (1) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 40.311839ms) Dec 23 13:22:13.905: INFO: (1) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test<... (200; 37.115641ms) Dec 23 13:22:13.948: INFO: (2) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 37.127765ms) Dec 23 13:22:13.948: INFO: (2) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 37.125507ms) Dec 23 13:22:13.948: INFO: (2) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... 
(200; 37.009542ms) Dec 23 13:22:13.948: INFO: (2) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 36.778544ms) Dec 23 13:22:13.948: INFO: (2) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 24.978343ms) Dec 23 13:22:13.974: INFO: (3) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 24.944684ms) Dec 23 13:22:13.974: INFO: (3) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 25.242606ms) Dec 23 13:22:13.974: INFO: (3) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 25.489281ms) Dec 23 13:22:13.974: INFO: (3) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 25.597824ms) Dec 23 13:22:13.974: INFO: (3) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 25.604248ms) Dec 23 13:22:13.975: INFO: (3) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 26.064348ms) Dec 23 13:22:13.976: INFO: (3) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 27.476457ms) Dec 23 13:22:13.977: INFO: (3) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 28.106975ms) Dec 23 13:22:13.977: INFO: (3) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 28.553249ms) Dec 23 13:22:13.978: INFO: (3) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 29.337074ms) Dec 23 13:22:13.978: INFO: (3) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 28.678096ms) Dec 23 13:22:13.978: INFO: (3) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 29.075896ms) Dec 23 13:22:13.979: INFO: (3) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 30.048934ms) Dec 23 13:22:13.995: INFO: (4) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 15.825555ms) Dec 23 13:22:13.996: INFO: (4) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 16.788217ms) Dec 23 13:22:13.998: INFO: (4) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 18.264954ms) Dec 23 13:22:13.999: INFO: (4) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 19.131047ms) Dec 23 13:22:14.000: INFO: (4) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... 
(200; 20.102711ms) Dec 23 13:22:14.000: INFO: (4) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 22.35486ms) Dec 23 13:22:14.003: INFO: (4) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 23.688305ms) Dec 23 13:22:14.003: INFO: (4) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 23.791283ms) Dec 23 13:22:14.004: INFO: (4) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 24.730482ms) Dec 23 13:22:14.004: INFO: (4) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 24.417411ms) Dec 23 13:22:14.004: INFO: (4) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 24.887497ms) Dec 23 13:22:14.004: INFO: (4) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 24.94894ms) Dec 23 13:22:14.004: INFO: (4) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 25.159164ms) Dec 23 13:22:14.014: INFO: (5) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 9.518977ms) Dec 23 13:22:14.021: INFO: (5) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 16.492848ms) Dec 23 13:22:14.021: INFO: (5) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 16.269304ms) Dec 23 13:22:14.022: INFO: (5) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 17.374944ms) Dec 23 13:22:14.026: INFO: (5) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 20.871972ms) Dec 23 13:22:14.026: INFO: (5) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 20.934939ms) Dec 23 13:22:14.026: INFO: (5) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 21.55163ms) Dec 23 13:22:14.027: INFO: (5) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 22.098364ms) Dec 23 13:22:14.027: INFO: (5) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 22.873119ms) Dec 23 13:22:14.028: INFO: (5) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 23.322812ms) Dec 23 13:22:14.028: INFO: (5) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 15.809015ms) Dec 23 13:22:14.048: INFO: (6) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 15.900674ms) Dec 23 13:22:14.048: INFO: (6) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... 
(200; 16.582852ms) Dec 23 13:22:14.048: INFO: (6) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 16.77164ms) Dec 23 13:22:14.048: INFO: (6) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 17.384972ms) Dec 23 13:22:14.049: INFO: (6) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 16.980434ms) Dec 23 13:22:14.049: INFO: (6) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 17.616897ms) Dec 23 13:22:14.050: INFO: (6) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 18.476371ms) Dec 23 13:22:14.050: INFO: (6) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 17.870958ms) Dec 23 13:22:14.072: INFO: (7) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 19.025964ms) Dec 23 13:22:14.072: INFO: (7) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 18.735736ms) Dec 23 13:22:14.072: INFO: (7) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: ... (200; 18.807543ms) Dec 23 13:22:14.073: INFO: (7) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 19.187692ms) Dec 23 13:22:14.073: INFO: (7) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 18.791359ms) Dec 23 13:22:14.073: INFO: (7) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 20.314233ms) Dec 23 13:22:14.074: INFO: (7) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 20.896219ms) Dec 23 13:22:14.079: INFO: (7) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 25.576764ms) Dec 23 13:22:14.079: INFO: (7) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 25.799195ms) Dec 23 13:22:14.079: INFO: (7) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 25.897491ms) Dec 23 13:22:14.079: INFO: (7) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 25.427552ms) Dec 23 13:22:14.079: INFO: (7) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 25.169338ms) Dec 23 13:22:14.079: INFO: (7) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 26.422182ms) Dec 23 13:22:14.080: INFO: (7) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 26.077419ms) Dec 23 13:22:14.101: INFO: (8) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 20.208733ms) Dec 23 13:22:14.102: INFO: (8) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 20.872662ms) Dec 23 13:22:14.102: INFO: (8) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 20.173464ms) Dec 23 13:22:14.102: INFO: (8) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 19.554534ms) Dec 23 13:22:14.102: INFO: (8) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... 
(200; 18.862559ms) Dec 23 13:22:14.105: INFO: (8) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 23.464567ms) Dec 23 13:22:14.107: INFO: (8) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 24.307467ms) Dec 23 13:22:14.107: INFO: (8) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 25.882491ms) Dec 23 13:22:14.107: INFO: (8) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 26.634165ms) Dec 23 13:22:14.107: INFO: (8) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 24.396172ms) Dec 23 13:22:14.108: INFO: (8) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 25.939607ms) Dec 23 13:22:14.108: INFO: (8) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 24.720655ms) Dec 23 13:22:14.108: INFO: (8) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 27.785804ms) Dec 23 13:22:14.109: INFO: (8) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test<... (200; 18.422762ms) Dec 23 13:22:14.129: INFO: (9) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 18.879587ms) Dec 23 13:22:14.129: INFO: (9) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 18.25502ms) Dec 23 13:22:14.129: INFO: (9) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 18.323903ms) Dec 23 13:22:14.129: INFO: (9) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 18.995767ms) Dec 23 13:22:14.129: INFO: (9) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 21.380073ms) Dec 23 13:22:14.132: INFO: (9) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 22.685149ms) Dec 23 13:22:14.133: INFO: (9) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 22.412078ms) Dec 23 13:22:14.133: INFO: (9) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 22.802962ms) Dec 23 13:22:14.133: INFO: (9) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 22.813779ms) Dec 23 13:22:14.134: INFO: (9) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 24.429519ms) Dec 23 13:22:14.135: INFO: (9) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 24.214719ms) Dec 23 13:22:14.135: INFO: (9) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 24.908583ms) Dec 23 13:22:14.144: INFO: (10) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 8.17687ms) Dec 23 13:22:14.144: INFO: (10) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 7.725551ms) Dec 23 13:22:14.146: INFO: (10) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 10.356719ms) Dec 23 13:22:14.147: INFO: (10) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 10.288084ms) Dec 23 13:22:14.147: INFO: (10) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... 
(200; 10.830467ms) Dec 23 13:22:14.147: INFO: (10) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 11.101622ms) Dec 23 13:22:14.147: INFO: (10) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 11.579514ms) Dec 23 13:22:14.147: INFO: (10) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 11.468913ms) Dec 23 13:22:14.147: INFO: (10) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test<... (200; 8.27638ms) Dec 23 13:22:14.158: INFO: (11) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 8.52916ms) Dec 23 13:22:14.158: INFO: (11) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: ... (200; 9.353732ms) Dec 23 13:22:14.158: INFO: (11) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 9.329734ms) Dec 23 13:22:14.159: INFO: (11) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 9.725693ms) Dec 23 13:22:14.160: INFO: (11) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 11.290486ms) Dec 23 13:22:14.163: INFO: (11) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 13.789929ms) Dec 23 13:22:14.163: INFO: (11) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 14.067255ms) Dec 23 13:22:14.164: INFO: (11) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 14.683544ms) Dec 23 13:22:14.164: INFO: (11) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 15.236139ms) Dec 23 13:22:14.164: INFO: (11) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 15.115592ms) Dec 23 13:22:14.180: INFO: (12) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 15.537331ms) Dec 23 13:22:14.183: INFO: (12) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 18.072017ms) Dec 23 13:22:14.183: INFO: (12) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 18.139207ms) Dec 23 13:22:14.184: INFO: (12) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 19.794294ms) Dec 23 13:22:14.184: INFO: (12) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 19.670172ms) Dec 23 13:22:14.184: INFO: (12) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 20.037945ms) Dec 23 13:22:14.185: INFO: (12) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 20.88337ms) Dec 23 13:22:14.187: INFO: (12) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... 
(200; 22.401547ms) Dec 23 13:22:14.187: INFO: (12) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 22.233765ms) Dec 23 13:22:14.187: INFO: (12) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 22.775966ms) Dec 23 13:22:14.187: INFO: (12) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 22.609038ms) Dec 23 13:22:14.187: INFO: (12) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 22.817148ms) Dec 23 13:22:14.187: INFO: (12) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test<... (200; 10.701549ms) Dec 23 13:22:14.201: INFO: (13) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 10.64349ms) Dec 23 13:22:14.201: INFO: (13) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 11.836699ms) Dec 23 13:22:14.202: INFO: (13) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 11.632018ms) Dec 23 13:22:14.202: INFO: (13) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 12.591273ms) Dec 23 13:22:14.203: INFO: (13) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 13.688492ms) Dec 23 13:22:14.203: INFO: (13) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 14.67746ms) Dec 23 13:22:14.205: INFO: (13) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 14.822742ms) Dec 23 13:22:14.219: INFO: (14) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 13.276461ms) Dec 23 13:22:14.219: INFO: (14) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 13.345522ms) Dec 23 13:22:14.219: INFO: (14) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 14.003204ms) Dec 23 13:22:14.220: INFO: (14) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test<... (200; 15.104129ms) Dec 23 13:22:14.221: INFO: (14) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... 
(200; 15.921858ms) Dec 23 13:22:14.221: INFO: (14) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 16.086516ms) Dec 23 13:22:14.221: INFO: (14) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 15.641323ms) Dec 23 13:22:14.221: INFO: (14) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 16.234504ms) Dec 23 13:22:14.221: INFO: (14) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 16.064749ms) Dec 23 13:22:14.222: INFO: (14) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 16.651162ms) Dec 23 13:22:14.222: INFO: (14) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 16.547369ms) Dec 23 13:22:14.222: INFO: (14) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 16.458162ms) Dec 23 13:22:14.223: INFO: (14) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 17.571414ms) Dec 23 13:22:14.223: INFO: (14) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 17.424246ms) Dec 23 13:22:14.224: INFO: (14) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 17.979181ms) Dec 23 13:22:14.230: INFO: (15) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 6.039856ms) Dec 23 13:22:14.231: INFO: (15) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 7.379052ms) Dec 23 13:22:14.231: INFO: (15) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 7.243967ms) Dec 23 13:22:14.232: INFO: (15) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 7.445878ms) Dec 23 13:22:14.233: INFO: (15) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 8.24187ms) Dec 23 13:22:14.234: INFO: (15) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 9.777119ms) Dec 23 13:22:14.234: INFO: (15) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 9.826366ms) Dec 23 13:22:14.234: INFO: (15) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 6.875958ms) Dec 23 13:22:14.248: INFO: (16) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 6.797036ms) Dec 23 13:22:14.248: INFO: (16) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 6.863211ms) Dec 23 13:22:14.248: INFO: (16) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... 
(200; 7.639994ms) Dec 23 13:22:14.249: INFO: (16) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 7.707009ms) Dec 23 13:22:14.251: INFO: (16) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 10.024714ms) Dec 23 13:22:14.253: INFO: (16) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 11.914069ms) Dec 23 13:22:14.253: INFO: (16) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 11.729352ms) Dec 23 13:22:14.254: INFO: (16) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 12.634335ms) Dec 23 13:22:14.255: INFO: (16) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 13.657256ms) Dec 23 13:22:14.257: INFO: (16) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 15.481406ms) Dec 23 13:22:14.257: INFO: (16) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 15.658186ms) Dec 23 13:22:14.257: INFO: (16) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 16.257292ms) Dec 23 13:22:14.260: INFO: (16) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 18.823384ms) Dec 23 13:22:14.276: INFO: (17) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 15.315898ms) Dec 23 13:22:14.276: INFO: (17) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 15.389064ms) Dec 23 13:22:14.276: INFO: (17) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 15.2905ms) Dec 23 13:22:14.276: INFO: (17) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 15.673932ms) Dec 23 13:22:14.277: INFO: (17) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test (200; 16.806348ms) Dec 23 13:22:14.277: INFO: (17) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 17.393541ms) Dec 23 13:22:14.277: INFO: (17) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 17.437987ms) Dec 23 13:22:14.279: INFO: (17) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 18.342614ms) Dec 23 13:22:14.279: INFO: (17) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 18.635615ms) Dec 23 13:22:14.279: INFO: (17) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... 
(200; 18.926963ms) Dec 23 13:22:14.279: INFO: (17) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 19.105921ms) Dec 23 13:22:14.281: INFO: (17) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 20.695485ms) Dec 23 13:22:14.281: INFO: (17) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 21.219903ms) Dec 23 13:22:14.290: INFO: (18) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 8.166239ms) Dec 23 13:22:14.290: INFO: (18) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 8.419217ms) Dec 23 13:22:14.290: INFO: (18) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 8.103423ms) Dec 23 13:22:14.291: INFO: (18) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 9.607569ms) Dec 23 13:22:14.291: INFO: (18) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: ... (200; 10.601311ms) Dec 23 13:22:14.294: INFO: (18) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 12.113583ms) Dec 23 13:22:14.294: INFO: (18) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 12.548709ms) Dec 23 13:22:14.295: INFO: (18) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 13.279615ms) Dec 23 13:22:14.297: INFO: (18) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:1080/proxy/: test<... (200; 14.840411ms) Dec 23 13:22:14.297: INFO: (18) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 15.605995ms) Dec 23 13:22:14.297: INFO: (18) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 15.758194ms) Dec 23 13:22:14.297: INFO: (18) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 15.981597ms) Dec 23 13:22:14.297: INFO: (18) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 15.586569ms) Dec 23 13:22:14.300: INFO: (18) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 18.60282ms) Dec 23 13:22:14.300: INFO: (18) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 18.652408ms) Dec 23 13:22:14.316: INFO: (19) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:462/proxy/: tls qux (200; 15.987968ms) Dec 23 13:22:14.316: INFO: (19) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 15.579124ms) Dec 23 13:22:14.317: INFO: (19) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname2/proxy/: bar (200; 15.987073ms) Dec 23 13:22:14.317: INFO: (19) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/: foo (200; 16.652843ms) Dec 23 13:22:14.317: INFO: (19) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 16.515329ms) Dec 23 13:22:14.317: INFO: (19) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:443/proxy/: test<... 
(200; 18.276805ms) Dec 23 13:22:14.319: INFO: (19) /api/v1/namespaces/proxy-8972/pods/https:proxy-service-6gs9h-pjl5x:460/proxy/: tls baz (200; 18.747709ms) Dec 23 13:22:14.319: INFO: (19) /api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname2/proxy/: bar (200; 18.560089ms) Dec 23 13:22:14.319: INFO: (19) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname1/proxy/: tls baz (200; 18.71263ms) Dec 23 13:22:14.320: INFO: (19) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:162/proxy/: bar (200; 18.989958ms) Dec 23 13:22:14.320: INFO: (19) /api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x/proxy/: test (200; 19.597068ms) Dec 23 13:22:14.320: INFO: (19) /api/v1/namespaces/proxy-8972/services/https:proxy-service-6gs9h:tlsportname2/proxy/: tls qux (200; 19.65567ms) Dec 23 13:22:14.321: INFO: (19) /api/v1/namespaces/proxy-8972/services/http:proxy-service-6gs9h:portname1/proxy/: foo (200; 20.059843ms) Dec 23 13:22:14.321: INFO: (19) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:1080/proxy/: ... (200; 20.105316ms) Dec 23 13:22:14.321: INFO: (19) /api/v1/namespaces/proxy-8972/pods/http:proxy-service-6gs9h-pjl5x:160/proxy/: foo (200; 20.174498ms) STEP: deleting ReplicationController proxy-service-6gs9h in namespace proxy-8972, will wait for the garbage collector to delete the pods Dec 23 13:22:14.394: INFO: Deleting ReplicationController proxy-service-6gs9h took: 12.356773ms Dec 23 13:22:14.696: INFO: Terminating ReplicationController proxy-service-6gs9h pods took: 301.484737ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:22:26.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8972" for this suite. 
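[Editor's note] For anyone replaying the proxy checks above by hand: the same service and pod proxy subresource URLs can be fetched through the apiserver with kubectl. A minimal sketch, assuming the namespace, service, and pod from the run above still exist (kubectl get --raw simply issues a GET against the given API path):

    # proxy through the service, selecting the named port "portname1"
    kubectl --kubeconfig=/root/.kube/config get --raw \
      '/api/v1/namespaces/proxy-8972/services/proxy-service-6gs9h:portname1/proxy/'
    # proxy straight to the backing pod on port 160 (the ":160" suffix selects the port)
    kubectl --kubeconfig=/root/.kube/config get --raw \
      '/api/v1/namespaces/proxy-8972/pods/proxy-service-6gs9h-pjl5x:160/proxy/'
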
Dec 23 13:22:32.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:22:32.880: INFO: namespace proxy-8972 deletion completed in 6.142786981s • [SLOW TEST:28.367 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:22:32.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 23 13:22:33.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5590' Dec 23 13:22:33.341: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 23 13:22:33.342: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Dec 23 13:22:33.443: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-58l62] Dec 23 13:22:33.444: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-58l62" in namespace "kubectl-5590" to be "running and ready" Dec 23 13:22:33.457: INFO: Pod "e2e-test-nginx-rc-58l62": Phase="Pending", Reason="", readiness=false. Elapsed: 12.938357ms Dec 23 13:22:35.466: INFO: Pod "e2e-test-nginx-rc-58l62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022868618s Dec 23 13:22:37.478: INFO: Pod "e2e-test-nginx-rc-58l62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034339576s Dec 23 13:22:39.488: INFO: Pod "e2e-test-nginx-rc-58l62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043965091s Dec 23 13:22:41.498: INFO: Pod "e2e-test-nginx-rc-58l62": Phase="Running", Reason="", readiness=true. Elapsed: 8.054370153s Dec 23 13:22:41.498: INFO: Pod "e2e-test-nginx-rc-58l62" satisfied condition "running and ready" Dec 23 13:22:41.498: INFO: Wanted all 1 pods to be running and ready. Result: true. 
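[Editor's note] The stderr captured above spells out the migration path away from --generator=run/v1. Side by side, the deprecated invocation the test uses and the suggested replacements (image and namespace are taken from the log; the replacement resource names are illustrative):

    # deprecated form used by the test: creates a ReplicationController
    kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
      --generator=run/v1 --namespace=kubectl-5590
    # replacements: a bare pod, or an explicit workload object
    kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine \
      --generator=run-pod/v1 --namespace=kubectl-5590
    kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine \
      --namespace=kubectl-5590
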
Pods: [e2e-test-nginx-rc-58l62] Dec 23 13:22:41.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5590' Dec 23 13:22:41.755: INFO: stderr: "" Dec 23 13:22:41.755: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Dec 23 13:22:41.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5590' Dec 23 13:22:41.942: INFO: stderr: "" Dec 23 13:22:41.942: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:22:41.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5590" for this suite. Dec 23 13:23:04.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:23:04.123: INFO: namespace kubectl-5590 deletion completed in 22.140280865s • [SLOW TEST:31.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:23:04.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2244 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2244 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2244 Dec 23 13:23:04.344: INFO: Found 0 stateful pods, waiting for 1 Dec 23 13:23:14.367: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 23 13:23:14.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:23:15.033: INFO: stderr: "+ mv -v 
/usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:23:15.033: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:23:15.033: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:23:15.051: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 23 13:23:25.069: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:23:25.070: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:23:25.096: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999874s Dec 23 13:23:26.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988545026s Dec 23 13:23:27.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975668806s Dec 23 13:23:28.135: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963465221s Dec 23 13:23:29.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.949374952s Dec 23 13:23:30.162: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.935418155s Dec 23 13:23:31.176: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.922235358s Dec 23 13:23:32.186: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.907944561s Dec 23 13:23:33.198: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.898213347s Dec 23 13:23:34.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 886.539767ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2244 Dec 23 13:23:35.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:23:35.842: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 23 13:23:35.843: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:23:35.843: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:23:35.872: INFO: Found 2 stateful pods, waiting for 3 Dec 23 13:23:45.884: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:23:45.884: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:23:45.884: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 23 13:23:55.890: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:23:55.890: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:23:55.890: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 23 13:23:55.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:23:56.396: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:23:56.397: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:23:56.397: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:23:56.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:23:56.862: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:23:56.862: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:23:56.862: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:23:56.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:23:57.319: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:23:57.319: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:23:57.319: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:23:57.319: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:23:57.327: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 23 13:24:07.344: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:24:07.344: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:24:07.344: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 23 13:24:07.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999498s Dec 23 13:24:08.466: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.901922796s Dec 23 13:24:09.477: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.888216145s Dec 23 13:24:10.507: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.876193961s Dec 23 13:24:11.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.846872865s Dec 23 13:24:12.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.831631825s Dec 23 13:24:13.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.405181881s Dec 23 13:24:14.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.39394729s Dec 23 13:24:15.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.383301238s Dec 23 13:24:16.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 372.373467ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2244 Dec 23 13:24:18.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:24:18.705: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 23 13:24:18.705: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:24:18.705: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:24:18.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:24:19.199: INFO: stderr: "+ mv -v /tmp/index.html
/usr/share/nginx/html/\n" Dec 23 13:24:19.199: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:24:19.199: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:24:19.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2244 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:24:19.797: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 23 13:24:19.798: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:24:19.798: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:24:19.798: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 23 13:24:49.936: INFO: Deleting all statefulset in ns statefulset-2244 Dec 23 13:24:49.941: INFO: Scaling statefulset ss to 0 Dec 23 13:24:49.954: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:24:49.957: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:24:50.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2244" for this suite. Dec 23 13:24:56.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:24:56.160: INFO: namespace statefulset-2244 deletion completed in 6.116344465s • [SLOW TEST:112.036 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:24:56.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Dec 23 13:24:56.207: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:24:56.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3139" for this suite. Dec 23 13:25:02.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:25:02.583: INFO: namespace kubectl-3139 deletion completed in 6.258964781s • [SLOW TEST:6.423 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:25:02.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-c4d449aa-1775-4656-a722-77047525639e in namespace container-probe-2833 Dec 23 13:25:10.799: INFO: Started pod busybox-c4d449aa-1775-4656-a722-77047525639e in namespace container-probe-2833 STEP: checking the pod's current state and verifying that restartCount is present Dec 23 13:25:10.804: INFO: Initial restart count of pod busybox-c4d449aa-1775-4656-a722-77047525639e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:29:12.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2833" for this suite. 
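[Editor's note] The probe under test here is the exec probe named in the test title: restartCount stays at 0 for the whole four-minute observation window because /tmp/health never disappears. A minimal sketch of an equivalent pod, under the assumption that the test pod looks like the standard busybox liveness example (pod name and sleep duration are illustrative, not the test's exact spec):

    kubectl apply -n container-probe-2833 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-liveness-ok    # illustrative name
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        # /tmp/health exists for the container's whole life, so
        # "cat /tmp/health" keeps succeeding and no restart happens
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 15
          periodSeconds: 5
    EOF
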
Dec 23 13:29:18.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:29:18.438: INFO: namespace container-probe-2833 deletion completed in 6.188397674s • [SLOW TEST:255.855 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:29:18.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 23 13:29:27.206: INFO: Successfully updated pod "labelsupdated107f083-7b31-434e-aaff-7612fcf568dc" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:29:29.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4404" for this suite. 
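[Editor's note] The labels-update test above creates a pod with a projected downwardAPI volume, then edits a label and waits for the change to appear in the mounted file. A sketch under those assumptions (all resource names here are illustrative):

    kubectl apply -n projected-4404 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo            # illustrative name
      labels:
        key: value1
    spec:
      containers:
      - name: client
        image: docker.io/library/busybox:1.29
        # print the projected labels file so the update is observable in logs
        args: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
    EOF
    # the kubelet refreshes the projected file shortly after the label changes
    kubectl -n projected-4404 label pod labels-demo key=value2 --overwrite
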
Dec 23 13:29:53.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:29:53.513: INFO: namespace projected-4404 deletion completed in 24.19448013s • [SLOW TEST:35.073 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:29:53.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 23 13:29:53.641: INFO: Create a RollingUpdate DaemonSet Dec 23 13:29:53.646: INFO: Check that daemon pods launch on every node of the cluster Dec 23 13:29:53.662: INFO: Number of nodes with available pods: 0 Dec 23 13:29:53.663: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:29:54.684: INFO: Number of nodes with available pods: 0 Dec 23 13:29:54.684: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:29:55.984: INFO: Number of nodes with available pods: 0 Dec 23 13:29:55.985: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:29:56.912: INFO: Number of nodes with available pods: 0 Dec 23 13:29:56.912: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:29:57.676: INFO: Number of nodes with available pods: 0 Dec 23 13:29:57.676: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:29:58.704: INFO: Number of nodes with available pods: 0 Dec 23 13:29:58.704: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:30:00.879: INFO: Number of nodes with available pods: 0 Dec 23 13:30:00.879: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:30:01.687: INFO: Number of nodes with available pods: 0 Dec 23 13:30:01.687: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:30:02.687: INFO: Number of nodes with available pods: 0 Dec 23 13:30:02.687: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:30:03.681: INFO: Number of nodes with available pods: 0 Dec 23 13:30:03.681: INFO: Node iruya-node is running more than one daemon pod Dec 23 13:30:04.724: INFO: Number of nodes with available pods: 2 Dec 23 13:30:04.724: INFO: Number of running nodes: 2, number of available pods: 2 Dec 23 13:30:04.724: INFO: Update the DaemonSet to trigger a rollout Dec 23 13:30:04.744: INFO: Updating DaemonSet daemon-set Dec 23 13:30:16.851: INFO: Roll back the DaemonSet before rollout is complete Dec 23 13:30:16.868: INFO: Updating DaemonSet daemon-set Dec 23 13:30:16.869: INFO: Make sure DaemonSet 
rollback is complete Dec 23 13:30:16.914: INFO: Wrong image for pod: daemon-set-gvcd7. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 23 13:30:16.914: INFO: Pod daemon-set-gvcd7 is not available Dec 23 13:30:18.037: INFO: Wrong image for pod: daemon-set-gvcd7. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 23 13:30:18.037: INFO: Pod daemon-set-gvcd7 is not available Dec 23 13:30:19.040: INFO: Wrong image for pod: daemon-set-gvcd7. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 23 13:30:19.040: INFO: Pod daemon-set-gvcd7 is not available Dec 23 13:30:20.035: INFO: Wrong image for pod: daemon-set-gvcd7. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 23 13:30:20.035: INFO: Pod daemon-set-gvcd7 is not available Dec 23 13:30:21.036: INFO: Wrong image for pod: daemon-set-gvcd7. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Dec 23 13:30:21.036: INFO: Pod daemon-set-gvcd7 is not available Dec 23 13:30:22.053: INFO: Pod daemon-set-67c8v is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9410, will wait for the garbage collector to delete the pods Dec 23 13:30:22.312: INFO: Deleting DaemonSet.extensions daemon-set took: 40.755616ms Dec 23 13:30:22.613: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.755454ms Dec 23 13:30:29.162: INFO: Number of nodes with available pods: 0 Dec 23 13:30:29.162: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 13:30:29.168: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9410/daemonsets","resourceVersion":"17766173"},"items":null} Dec 23 13:30:29.171: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9410/pods","resourceVersion":"17766173"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:30:29.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9410" for this suite. 
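[Editor's note] The rollback test drives the DaemonSet API directly; the same sequence expressed with kubectl would look roughly like this (the container name "app" is an assumption, the images are the ones recorded in the log above):

    # trigger a RollingUpdate to an image that can never pull...
    kubectl -n daemonsets-9410 set image daemonset/daemon-set app=foo:non-existent
    # ...then roll back to the previous revision before the rollout completes;
    # pods still healthy on nginx:1.14-alpine should not be restarted
    kubectl -n daemonsets-9410 rollout undo daemonset/daemon-set
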
Dec 23 13:30:35.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:30:35.304: INFO: namespace daemonsets-9410 deletion completed in 6.119079131s • [SLOW TEST:41.790 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:30:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 23 13:33:37.739: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:37.814: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:39.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:39.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:41.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:41.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:43.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:43.831: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:45.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:45.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:47.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:47.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:49.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:49.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:51.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:51.837: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:53.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:53.831: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:55.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:55.832: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:57.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:33:57.834: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:33:59.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 
13:33:59.842: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:01.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:01.860: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:03.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:03.827: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:05.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:05.826: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:07.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:07.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:09.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:09.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:11.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:11.833: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:13.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:13.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:15.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:15.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:17.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:17.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:19.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:19.843: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:21.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:21.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:23.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:23.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:25.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:25.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:27.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:27.827: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:29.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:29.831: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:31.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:31.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:33.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:33.831: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:35.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:35.826: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:37.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:37.869: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:39.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:39.835: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:41.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:41.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:43.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:43.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 
13:34:45.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:45.870: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:47.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:47.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:49.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:49.833: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:51.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:51.845: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:53.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:53.827: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:55.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:55.835: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:57.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:57.832: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:34:59.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:34:59.831: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:01.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:01.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:03.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:03.827: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:05.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:05.829: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:07.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:07.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:09.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:09.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:11.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:11.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:13.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:13.830: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:15.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:15.827: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:17.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:17.826: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:19.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:19.827: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:21.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:21.832: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:23.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:23.828: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:25.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:25.837: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 13:35:27.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 13:35:27.831: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:35:27.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8128" for this suite. Dec 23 13:35:49.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:35:50.072: INFO: namespace container-lifecycle-hook-8128 deletion completed in 22.232220193s • [SLOW TEST:314.767 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:35:50.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-55bae4e0-d502-4d0f-9861-1bff1e022982 in namespace container-probe-5828 Dec 23 13:36:00.246: INFO: Started pod busybox-55bae4e0-d502-4d0f-9861-1bff1e022982 in namespace container-probe-5828 STEP: checking the pod's current state and verifying that restartCount is present Dec 23 13:36:00.251: INFO: Initial restart count of pod busybox-55bae4e0-d502-4d0f-9861-1bff1e022982 is 0 Dec 23 13:36:52.620: INFO: Restart count of pod container-probe-5828/busybox-55bae4e0-d502-4d0f-9861-1bff1e022982 is now 1 (52.368779435s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:36:52.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5828" for this suite. 
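[Editor's note] This is the counterpart of the earlier probe test: here /tmp/health is removed partway through, the exec probe starts failing, and the kubelet restarts the container (the log records restartCount reaching 1 after about 52s). A minimal sketch of a pod that behaves this way (name and timings illustrative):

    kubectl apply -n container-probe-5828 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-liveness-restart    # illustrative name
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        # the probe file vanishes after 10s, so "cat /tmp/health"
        # starts failing and the container gets restarted
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
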
Dec 23 13:36:58.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:36:58.972: INFO: namespace container-probe-5828 deletion completed in 6.202882703s • [SLOW TEST:68.900 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:36:58.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 23 13:37:07.657: INFO: Successfully updated pod "pod-update-activedeadlineseconds-82fc0147-7fac-48ba-98a4-955fcef4604a" Dec 23 13:37:07.657: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-82fc0147-7fac-48ba-98a4-955fcef4604a" in namespace "pods-978" to be "terminated due to deadline exceeded" Dec 23 13:37:07.665: INFO: Pod "pod-update-activedeadlineseconds-82fc0147-7fac-48ba-98a4-955fcef4604a": Phase="Running", Reason="", readiness=true. Elapsed: 7.925703ms Dec 23 13:37:09.679: INFO: Pod "pod-update-activedeadlineseconds-82fc0147-7fac-48ba-98a4-955fcef4604a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.022135783s Dec 23 13:37:09.680: INFO: Pod "pod-update-activedeadlineseconds-82fc0147-7fac-48ba-98a4-955fcef4604a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:37:09.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-978" for this suite. 
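[Editor's note] activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a live pod (it can be set or lowered, never raised or cleared). The update the test performs through the Go client is equivalent to a patch like the one below, using the pod name from the log; the value 5 is an assumption chosen to match the ~2s observed failure:

    # once the deadline elapses the pod goes Phase=Failed, Reason=DeadlineExceeded
    kubectl -n pods-978 patch pod \
      pod-update-activedeadlineseconds-82fc0147-7fac-48ba-98a4-955fcef4604a \
      --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
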
Dec 23 13:37:15.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:37:15.924: INFO: namespace pods-978 deletion completed in 6.233773618s • [SLOW TEST:16.951 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:37:15.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 23 13:37:16.089: INFO: PodSpec: initContainers in spec.initContainers Dec 23 13:38:16.338: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-08033638-84ed-4e0f-9728-276ef8b1b799", GenerateName:"", Namespace:"init-container-6362", SelfLink:"/api/v1/namespaces/init-container-6362/pods/pod-init-08033638-84ed-4e0f-9728-276ef8b1b799", UID:"55b928af-eea7-4ea3-8eb9-f2602cf3ca4b", ResourceVersion:"17766960", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712705036, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"89235659"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-m6cj4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e40100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m6cj4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m6cj4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m6cj4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003284088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f8c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003284110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003284130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003284138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00328413c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712705036, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712705036, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712705036, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712705036, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00171e120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002128070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021280e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://dc7a0b0927693811031e05942ae90f28ea9670f30b97e0195a258c773a7eb9cc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00171e1a0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00171e180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:38:16.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6362" for this suite. Dec 23 13:38:38.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:38:38.539: INFO: namespace init-container-6362 deletion completed in 22.167243153s • [SLOW TEST:82.614 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:38:38.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 23 13:38:38.760: INFO: Waiting up to 5m0s for pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce" in namespace "downward-api-5662" to be "success or failure" Dec 23 13:38:38.784: INFO: Pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce": Phase="Pending", Reason="", readiness=false. Elapsed: 23.611479ms Dec 23 13:38:40.793: INFO: Pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032705642s Dec 23 13:38:42.802: INFO: Pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041615636s Dec 23 13:38:44.815: INFO: Pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.054682141s Dec 23 13:38:46.833: INFO: Pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071942427s STEP: Saw pod success Dec 23 13:38:46.833: INFO: Pod "downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce" satisfied condition "success or failure" Dec 23 13:38:46.838: INFO: Trying to get logs from node iruya-node pod downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce container dapi-container: STEP: delete the pod Dec 23 13:38:47.119: INFO: Waiting for pod downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce to disappear Dec 23 13:38:47.146: INFO: Pod downward-api-e0d4e3f9-0ef1-4a5f-85a6-2774f08204ce no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:38:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5662" for this suite. Dec 23 13:38:53.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:38:53.484: INFO: namespace downward-api-5662 deletion completed in 6.219943944s • [SLOW TEST:14.941 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:38:53.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-9619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9619 to expose endpoints map[] Dec 23 13:38:53.735: INFO: successfully validated that service multi-endpoint-test in namespace services-9619 exposes endpoints map[] (28.74489ms elapsed) STEP: Creating pod pod1 in namespace services-9619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9619 to expose endpoints map[pod1:[100]] Dec 23 13:38:57.966: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.19012295s elapsed, will retry) Dec 23 13:39:03.025: INFO: successfully validated that service multi-endpoint-test in namespace services-9619 exposes endpoints map[pod1:[100]] (9.248574684s elapsed) STEP: Creating pod pod2 in namespace services-9619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9619 to expose endpoints map[pod1:[100] pod2:[101]] Dec 23 13:39:07.745: INFO: Unexpected endpoints: found map[afbca315-1233-446e-9d51-519749d4a224:[100]], expected map[pod1:[100] pod2:[101]] (4.714641578s elapsed, 
will retry) Dec 23 13:39:09.899: INFO: successfully validated that service multi-endpoint-test in namespace services-9619 exposes endpoints map[pod1:[100] pod2:[101]] (6.869273741s elapsed) STEP: Deleting pod pod1 in namespace services-9619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9619 to expose endpoints map[pod2:[101]] Dec 23 13:39:09.978: INFO: successfully validated that service multi-endpoint-test in namespace services-9619 exposes endpoints map[pod2:[101]] (61.026514ms elapsed) STEP: Deleting pod pod2 in namespace services-9619 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9619 to expose endpoints map[] Dec 23 13:39:11.050: INFO: successfully validated that service multi-endpoint-test in namespace services-9619 exposes endpoints map[] (1.020155392s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:39:11.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9619" for this suite. Dec 23 13:39:33.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:39:33.393: INFO: namespace services-9619 deletion completed in 22.17558476s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:39.909 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:39:33.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2482 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Dec 23 13:39:33.653: INFO: Found 0 stateful pods, waiting for 3 Dec 23 13:39:43.671: INFO: Found 2 stateful pods, waiting for 3 Dec 23 13:39:53.672: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:39:53.672: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:39:53.672: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 23 13:40:03.671: INFO: Waiting for pod ss2-0 to enter 
Running - Ready=true, currently Running - Ready=true Dec 23 13:40:03.671: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:40:03.671: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 23 13:40:03.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2482 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:40:06.369: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:40:06.370: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:40:06.370: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 23 13:40:16.434: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Dec 23 13:40:26.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2482 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:40:26.981: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 23 13:40:26.981: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:40:26.981: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:40:37.016: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update Dec 23 13:40:37.016: INFO: Waiting for Pod statefulset-2482/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 13:40:37.016: INFO: Waiting for Pod statefulset-2482/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 13:40:47.032: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update Dec 23 13:40:47.032: INFO: Waiting for Pod statefulset-2482/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 13:40:47.032: INFO: Waiting for Pod statefulset-2482/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 13:40:57.043: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update Dec 23 13:40:57.043: INFO: Waiting for Pod statefulset-2482/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 23 13:41:07.025: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update STEP: Rolling back to a previous revision Dec 23 13:41:17.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2482 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 23 13:41:17.573: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 23 13:41:17.573: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 23 13:41:17.573: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 23 13:41:27.631: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 23 13:41:37.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2482 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 23 13:41:38.144: INFO: stderr: "+ mv -v 
/tmp/index.html /usr/share/nginx/html/\n" Dec 23 13:41:38.144: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 23 13:41:38.144: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 23 13:41:48.301: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update Dec 23 13:41:48.301: INFO: Waiting for Pod statefulset-2482/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 13:41:48.301: INFO: Waiting for Pod statefulset-2482/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 13:41:58.320: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update Dec 23 13:41:58.320: INFO: Waiting for Pod statefulset-2482/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 13:41:58.320: INFO: Waiting for Pod statefulset-2482/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 13:42:08.331: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update Dec 23 13:42:08.332: INFO: Waiting for Pod statefulset-2482/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 23 13:42:18.322: INFO: Waiting for StatefulSet statefulset-2482/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 23 13:42:28.324: INFO: Deleting all statefulset in ns statefulset-2482 Dec 23 13:42:28.329: INFO: Scaling statefulset ss2 to 0 Dec 23 13:42:58.399: INFO: Waiting for statefulset status.replicas updated to 0 Dec 23 13:42:58.405: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:42:58.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2482" for this suite. 
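The whole rolling update and rollback sequence above reduces to a StatefulSet with the default RollingUpdate strategy: changing the pod template's image creates a new controller revision, pods are replaced one at a time in reverse ordinal order, and re-applying the previous image rolls back the same way. A minimal sketch (names are illustrative; the service and images match the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2-demo                    # hypothetical name
spec:
  serviceName: test                 # the headless service created in the namespace above
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo
  updateStrategy:
    type: RollingUpdate             # default; updates pods in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        # Updating this to docker.io/library/nginx:1.15-alpine creates the new
        # revision (ss2-7c9b54fd4c above); reverting it rolls back to ss2-6c5cd755cd.
        image: docker.io/library/nginx:1.14-alpine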
Dec 23 13:43:06.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:43:06.571: INFO: namespace statefulset-2482 deletion completed in 8.138522586s • [SLOW TEST:213.178 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:43:06.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 23 13:43:06.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8316' Dec 23 13:43:06.875: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 23 13:43:06.875: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Dec 23 13:43:11.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8316' Dec 23 13:43:12.073: INFO: stderr: "" Dec 23 13:43:12.073: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:43:12.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8316" for this suite. 
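As the stderr above notes, kubectl run with --generator=deployment/apps.v1 is deprecated. The object it generates can be expressed declaratively instead; a sketch of the equivalent Deployment (the run= label is how that generator labels pods, the rest is approximated):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine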
Dec 23 13:43:18.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:43:18.248: INFO: namespace kubectl-8316 deletion completed in 6.16843001s • [SLOW TEST:11.676 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:43:18.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 23 13:43:18.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3" in namespace "downward-api-2948" to be "success or failure" Dec 23 13:43:18.393: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593191ms Dec 23 13:43:20.402: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017873849s Dec 23 13:43:22.415: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03081932s Dec 23 13:43:24.426: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041945162s Dec 23 13:43:26.438: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3": Phase="Running", Reason="", readiness=true. Elapsed: 8.053915346s Dec 23 13:43:28.450: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.065810143s STEP: Saw pod success Dec 23 13:43:28.450: INFO: Pod "downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3" satisfied condition "success or failure" Dec 23 13:43:28.455: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3 container client-container: STEP: delete the pod Dec 23 13:43:28.569: INFO: Waiting for pod downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3 to disappear Dec 23 13:43:28.576: INFO: Pod downwardapi-volume-408079cd-304d-4290-b460-6f511d5d29c3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:43:28.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2948" for this suite. Dec 23 13:43:34.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:43:34.747: INFO: namespace downward-api-2948 deletion completed in 6.162656906s • [SLOW TEST:16.498 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:43:34.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a9170449-c13e-42ea-90ea-ccb1b8fed725 STEP: Creating a pod to test consume secrets Dec 23 13:43:34.902: INFO: Waiting up to 5m0s for pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213" in namespace "secrets-865" to be "success or failure" Dec 23 13:43:34.918: INFO: Pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213": Phase="Pending", Reason="", readiness=false. Elapsed: 15.615015ms Dec 23 13:43:36.927: INFO: Pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024292923s Dec 23 13:43:38.935: INFO: Pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032804684s Dec 23 13:43:41.165: INFO: Pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262074862s Dec 23 13:43:43.173: INFO: Pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.27018619s STEP: Saw pod success Dec 23 13:43:43.173: INFO: Pod "pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213" satisfied condition "success or failure" Dec 23 13:43:43.175: INFO: Trying to get logs from node iruya-node pod pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213 container secret-volume-test: STEP: delete the pod Dec 23 13:43:43.252: INFO: Waiting for pod pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213 to disappear Dec 23 13:43:43.282: INFO: Pod pod-secrets-a9b97b5a-cf41-4e28-828f-715f421d7213 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:43:43.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-865" for this suite. Dec 23 13:43:49.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:43:49.475: INFO: namespace secrets-865 deletion completed in 6.189825584s • [SLOW TEST:14.728 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:43:49.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 23 13:43:57.673: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:43:57.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6502" for this suite. 
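The termination-message assertion above (Expected: &{DONE} to match ... DONE) hinges on two container-level settings: a non-default terminationMessagePath and a non-root security context. A minimal sketch; the path, uid, and message are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                 # the "non-root user" part of the test name
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    terminationMessagePolicy: File
  restartPolicy: Never

After the container exits, the kubelet copies the file's contents into status.containerStatuses[].state.terminated.message, which is the value the test compares against.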
Dec 23 13:44:03.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:44:04.065: INFO: namespace container-runtime-6502 deletion completed in 6.281848994s • [SLOW TEST:14.590 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:44:04.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 23 13:44:04.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1145' Dec 23 13:44:04.740: INFO: stderr: "" Dec 23 13:44:04.740: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 23 13:44:04.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1145' Dec 23 13:44:04.947: INFO: stderr: "" Dec 23 13:44:04.947: INFO: stdout: "update-demo-nautilus-fnwdt update-demo-nautilus-rmlw2 " Dec 23 13:44:04.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fnwdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1145' Dec 23 13:44:05.099: INFO: stderr: "" Dec 23 13:44:05.099: INFO: stdout: "" Dec 23 13:44:05.099: INFO: update-demo-nautilus-fnwdt is created but not running Dec 23 13:44:10.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1145' Dec 23 13:44:10.229: INFO: stderr: "" Dec 23 13:44:10.229: INFO: stdout: "update-demo-nautilus-fnwdt update-demo-nautilus-rmlw2 " Dec 23 13:44:10.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fnwdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1145' Dec 23 13:44:11.470: INFO: stderr: "" Dec 23 13:44:11.470: INFO: stdout: "" Dec 23 13:44:11.470: INFO: update-demo-nautilus-fnwdt is created but not running Dec 23 13:44:16.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1145' Dec 23 13:44:16.654: INFO: stderr: "" Dec 23 13:44:16.654: INFO: stdout: "update-demo-nautilus-fnwdt update-demo-nautilus-rmlw2 " Dec 23 13:44:16.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fnwdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1145' Dec 23 13:44:16.761: INFO: stderr: "" Dec 23 13:44:16.761: INFO: stdout: "true" Dec 23 13:44:16.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fnwdt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1145' Dec 23 13:44:16.909: INFO: stderr: "" Dec 23 13:44:16.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 13:44:16.909: INFO: validating pod update-demo-nautilus-fnwdt Dec 23 13:44:16.958: INFO: got data: { "image": "nautilus.jpg" } Dec 23 13:44:16.958: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 13:44:16.958: INFO: update-demo-nautilus-fnwdt is verified up and running Dec 23 13:44:16.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmlw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1145' Dec 23 13:44:17.080: INFO: stderr: "" Dec 23 13:44:17.080: INFO: stdout: "true" Dec 23 13:44:17.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmlw2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1145' Dec 23 13:44:17.202: INFO: stderr: "" Dec 23 13:44:17.202: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 13:44:17.202: INFO: validating pod update-demo-nautilus-rmlw2 Dec 23 13:44:17.227: INFO: got data: { "image": "nautilus.jpg" } Dec 23 13:44:17.227: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 13:44:17.227: INFO: update-demo-nautilus-rmlw2 is verified up and running STEP: using delete to clean up resources Dec 23 13:44:17.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1145' Dec 23 13:44:17.333: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:44:17.333: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 23 13:44:17.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1145' Dec 23 13:44:17.477: INFO: stderr: "No resources found.\n" Dec 23 13:44:17.478: INFO: stdout: "" Dec 23 13:44:17.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1145 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 23 13:44:17.738: INFO: stderr: "" Dec 23 13:44:17.738: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:44:17.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1145" for this suite. 
Dec 23 13:44:39.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:44:39.999: INFO: namespace kubectl-1145 deletion completed in 22.22375285s • [SLOW TEST:35.932 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:44:40.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Dec 23 13:44:40.089: INFO: Waiting up to 5m0s for pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f" in namespace "containers-7734" to be "success or failure" Dec 23 13:44:40.137: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.481606ms Dec 23 13:44:42.231: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141496347s Dec 23 13:44:44.252: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162863341s Dec 23 13:44:46.264: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174971337s Dec 23 13:44:48.278: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188161726s Dec 23 13:44:50.286: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.196919738s STEP: Saw pod success Dec 23 13:44:50.287: INFO: Pod "client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f" satisfied condition "success or failure" Dec 23 13:44:50.291: INFO: Trying to get logs from node iruya-node pod client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f container test-container: STEP: delete the pod Dec 23 13:44:50.415: INFO: Waiting for pod client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f to disappear Dec 23 13:44:50.422: INFO: Pod client-containers-8b818671-8898-4bc6-916b-fa79d214ce6f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:44:50.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7734" for this suite. 
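Overriding an image's default arguments, as exercised above, means setting the container's args field (Docker CMD) while leaving command (Docker ENTRYPOINT) untouched. A minimal sketch; the image and argument values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo          # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox:1.29
    # args replaces the image's CMD; command is intentionally omitted so the
    # image's own entrypoint handling stays in effect.
    args: ["echo", "overridden arguments"]
  restartPolicy: Never

The four command/args combinations follow Docker semantics: command overrides ENTRYPOINT, args overrides CMD, and omitting both falls back to the image defaults.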
Dec 23 13:44:56.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:44:56.677: INFO: namespace containers-7734 deletion completed in 6.248308632s • [SLOW TEST:16.677 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:44:56.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 23 13:44:56.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9" in namespace "downward-api-285" to be "success or failure" Dec 23 13:44:56.907: INFO: Pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.268881ms Dec 23 13:44:58.917: INFO: Pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045368087s Dec 23 13:45:00.928: INFO: Pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056155697s Dec 23 13:45:02.975: INFO: Pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102964175s Dec 23 13:45:04.986: INFO: Pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114614604s STEP: Saw pod success Dec 23 13:45:04.986: INFO: Pod "downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9" satisfied condition "success or failure" Dec 23 13:45:04.989: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9 container client-container: STEP: delete the pod Dec 23 13:45:05.125: INFO: Waiting for pod downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9 to disappear Dec 23 13:45:05.139: INFO: Pod downwardapi-volume-b0c65e5f-7002-43c9-9c99-f33b5e3caae9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:45:05.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-285" for this suite. 
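The "podname only" case above is a downwardAPI volume with a single fieldRef item; the container reads its own pod name from a file the kubelet maintains. A minimal sketch (names and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name  # the only field projected in this test
  restartPolicy: Never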
Dec 23 13:45:11.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:45:11.357: INFO: namespace downward-api-285 deletion completed in 6.200688985s • [SLOW TEST:14.678 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:45:11.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 23 13:45:11.501: INFO: Waiting up to 5m0s for pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78" in namespace "emptydir-1367" to be "success or failure" Dec 23 13:45:11.551: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78": Phase="Pending", Reason="", readiness=false. Elapsed: 49.675375ms Dec 23 13:45:13.561: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060195203s Dec 23 13:45:15.578: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07693739s Dec 23 13:45:17.587: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085391141s Dec 23 13:45:19.597: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095812393s Dec 23 13:45:21.607: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105903122s STEP: Saw pod success Dec 23 13:45:21.607: INFO: Pod "pod-5ae31e01-55fd-4549-ac4d-130f8238ff78" satisfied condition "success or failure" Dec 23 13:45:21.612: INFO: Trying to get logs from node iruya-node pod pod-5ae31e01-55fd-4549-ac4d-130f8238ff78 container test-container: STEP: delete the pod Dec 23 13:45:21.723: INFO: Waiting for pod pod-5ae31e01-55fd-4549-ac4d-130f8238ff78 to disappear Dec 23 13:45:21.734: INFO: Pod pod-5ae31e01-55fd-4549-ac4d-130f8238ff78 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:45:21.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1367" for this suite. 
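The (non-root,0777,tmpfs) cell of the EmptyDir matrix above combines a memory-backed emptyDir with a non-root security context; the test container creates a file with the requested mode and the suite verifies what it sees. A sketch, with the uid and paths as illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                 # the "non-root" part of the matrix
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "umask 0; touch /test-volume/f; chmod 0777 /test-volume/f; ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # the "tmpfs" part: backs the volume with RAM
  restartPolicy: Never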
Dec 23 13:45:27.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:45:28.004: INFO: namespace emptydir-1367 deletion completed in 6.259558748s • [SLOW TEST:16.645 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:45:28.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 23 13:45:28.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270" in namespace "projected-7616" to be "success or failure" Dec 23 13:45:28.126: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270": Phase="Pending", Reason="", readiness=false. Elapsed: 25.041454ms Dec 23 13:45:30.145: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043923861s Dec 23 13:45:32.160: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059446986s Dec 23 13:45:34.174: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072766873s Dec 23 13:45:36.182: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081113188s Dec 23 13:45:38.191: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090364147s STEP: Saw pod success Dec 23 13:45:38.192: INFO: Pod "downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270" satisfied condition "success or failure" Dec 23 13:45:38.196: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270 container client-container: STEP: delete the pod Dec 23 13:45:38.276: INFO: Waiting for pod downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270 to disappear Dec 23 13:45:38.338: INFO: Pod downwardapi-volume-23f9c1db-7d4d-4be0-9af6-ac5b1a3f4270 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:45:38.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7616" for this suite. 
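Functionally this mirrors the plain downwardAPI volume sketched earlier, but routed through a projected volume, which can merge downwardAPI, configMap, secret, and serviceAccountToken sources under a single mount point. Only the volumes stanza changes (names again illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:                       # projected volume: several sources, one mount
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF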
Dec 23 13:45:44.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:45:44.528: INFO: namespace projected-7616 deletion completed in 6.176860646s • [SLOW TEST:16.524 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:45:44.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 23 13:45:45.056: INFO: Waiting up to 5m0s for pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d" in namespace "emptydir-4472" to be "success or failure" Dec 23 13:45:45.077: INFO: Pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.614883ms Dec 23 13:45:47.094: INFO: Pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037698432s Dec 23 13:45:49.103: INFO: Pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047329607s Dec 23 13:45:51.124: INFO: Pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067509154s Dec 23 13:45:53.135: INFO: Pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079110721s STEP: Saw pod success Dec 23 13:45:53.135: INFO: Pod "pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d" satisfied condition "success or failure" Dec 23 13:45:53.140: INFO: Trying to get logs from node iruya-node pod pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d container test-container: STEP: delete the pod Dec 23 13:45:53.215: INFO: Waiting for pod pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d to disappear Dec 23 13:45:53.230: INFO: Pod pod-b6edacb8-e336-4c36-93a5-13fd7fe2225d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:45:53.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4472" for this suite. 
Dec 23 13:45:59.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:45:59.434: INFO: namespace emptydir-4472 deletion completed in 6.187242106s • [SLOW TEST:14.902 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:45:59.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 23 13:46:08.122: INFO: Successfully updated pod "pod-update-aa9b842b-3870-4b08-836c-5837e1c8cb02" STEP: verifying the updated pod is in kubernetes Dec 23 13:46:08.171: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:46:08.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7713" for this suite. 
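The pod-update test creates a pod, mutates it through the API, and re-reads it to confirm the change stuck. Labels are among the few freely mutable pod fields, so a rough kubectl rendition (pod name and label illustrative) is:

kubectl run pod-update-demo --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/pod-update-demo
kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-demo --show-labels   # the new label should appear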
Dec 23 13:46:30.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:46:30.363: INFO: namespace pods-7713 deletion completed in 22.184099927s • [SLOW TEST:30.928 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:46:30.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:46:40.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-568" for this suite. Dec 23 13:47:22.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:47:22.810: INFO: namespace kubelet-test-568 deletion completed in 42.212917167s • [SLOW TEST:52.445 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:47:22.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bdc90897-4d49-45b1-a18f-fe600026cae3 STEP: Creating a pod to test consume configMaps Dec 23 13:47:22.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f" in namespace "configmap-5809" to be "success or failure" Dec 23 13:47:22.991: INFO: Pod 
"pod-configmaps-532be278-7972-4613-9738-73de140f574f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.842809ms Dec 23 13:47:25.000: INFO: Pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046325615s Dec 23 13:47:27.015: INFO: Pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060887104s Dec 23 13:47:29.035: INFO: Pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081034477s Dec 23 13:47:31.076: INFO: Pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122433514s Dec 23 13:47:33.088: INFO: Pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134850302s STEP: Saw pod success Dec 23 13:47:33.089: INFO: Pod "pod-configmaps-532be278-7972-4613-9738-73de140f574f" satisfied condition "success or failure" Dec 23 13:47:33.093: INFO: Trying to get logs from node iruya-node pod pod-configmaps-532be278-7972-4613-9738-73de140f574f container configmap-volume-test: STEP: delete the pod Dec 23 13:47:33.188: INFO: Waiting for pod pod-configmaps-532be278-7972-4613-9738-73de140f574f to disappear Dec 23 13:47:33.197: INFO: Pod pod-configmaps-532be278-7972-4613-9738-73de140f574f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:47:33.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5809" for this suite. Dec 23 13:47:39.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:47:39.484: INFO: namespace configmap-5809 deletion completed in 6.274980384s • [SLOW TEST:16.674 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:47:39.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-f51d160f-640d-4c0a-aa0c-8c9b21d1b57a STEP: Creating configMap with name cm-test-opt-upd-f1efa07a-e2d6-4a73-831a-3233aaa7998f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f51d160f-640d-4c0a-aa0c-8c9b21d1b57a STEP: Updating configmap cm-test-opt-upd-f1efa07a-e2d6-4a73-831a-3233aaa7998f STEP: Creating configMap with name cm-test-opt-create-33b2ce0d-5974-4bee-83c9-7dc6a4715462 STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:49:03.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-762" for this suite. Dec 23 13:49:25.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:49:26.106: INFO: namespace configmap-762 deletion completed in 22.139754964s • [SLOW TEST:106.621 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:49:26.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 23 13:49:26.265: INFO: Waiting up to 5m0s for pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc" in namespace "downward-api-1526" to be "success or failure" Dec 23 13:49:26.273: INFO: Pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.633868ms Dec 23 13:49:28.283: INFO: Pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017682807s Dec 23 13:49:30.301: INFO: Pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036038819s Dec 23 13:49:32.376: INFO: Pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111314991s Dec 23 13:49:34.390: INFO: Pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125318538s STEP: Saw pod success Dec 23 13:49:34.391: INFO: Pod "downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc" satisfied condition "success or failure" Dec 23 13:49:34.395: INFO: Trying to get logs from node iruya-node pod downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc container dapi-container: STEP: delete the pod Dec 23 13:49:34.502: INFO: Waiting for pod downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc to disappear Dec 23 13:49:34.511: INFO: Pod downward-api-a6d300e7-16ab-4030-9f36-124105bcb5fc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:49:34.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1526" for this suite. 
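The env-var flavor of the downward API exercised above injects a container's own resource requests and limits via resourceFieldRef. A minimal sketch with illustrative names and resource values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_REQUEST=$MEMORY_REQUEST"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
          divisor: 1m                # report in millicores: prints 500
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF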
Dec 23 13:49:40.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:49:40.708: INFO: namespace downward-api-1526 deletion completed in 6.190480019s • [SLOW TEST:14.602 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:49:40.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-e828ea5e-ac0f-48b4-ab44-a10fd23ec709 STEP: Creating a pod to test consume configMaps Dec 23 13:49:40.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821" in namespace "configmap-4805" to be "success or failure" Dec 23 13:49:40.835: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821": Phase="Pending", Reason="", readiness=false. Elapsed: 3.077168ms Dec 23 13:49:42.855: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02309027s Dec 23 13:49:44.873: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041038101s Dec 23 13:49:46.882: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050635112s Dec 23 13:49:49.186: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353630305s Dec 23 13:49:51.194: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.361786595s STEP: Saw pod success Dec 23 13:49:51.194: INFO: Pod "pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821" satisfied condition "success or failure" Dec 23 13:49:51.198: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821 container configmap-volume-test: STEP: delete the pod Dec 23 13:49:51.282: INFO: Waiting for pod pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821 to disappear Dec 23 13:49:51.292: INFO: Pod pod-configmaps-5fbdd046-0d9d-4057-ad7f-dedc524c7821 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:49:51.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4805" for this suite. 
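The non-root configMap variant only changes who reads the mounted keys: configMap volumes default to 0644 files, so an unprivileged UID can consume them. Sketch, with illustrative names and data:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root reader
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-cm                  # files default to mode 0644 unless overridden
EOF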
Dec 23 13:49:57.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:49:57.568: INFO: namespace configmap-4805 deletion completed in 6.264442209s • [SLOW TEST:16.859 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:49:57.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2685.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2685.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2685.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2685.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2685.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2685.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 23 13:50:09.782: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508: the server could not find the requested resource (get pods dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508) Dec 23 13:50:09.788: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508: the server could not find the requested resource (get pods dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508) Dec 23 13:50:09.798: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2685.svc.cluster.local from pod dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508: the server could not find the requested resource (get pods dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508) Dec 23 13:50:09.809: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508: the server could not find the requested resource (get pods dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508) Dec 23 13:50:09.816: INFO: Unable to read jessie_udp@PodARecord from pod dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508: the server could not find the requested resource (get pods dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508) Dec 23 13:50:09.824: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508: the server could not find the requested resource (get pods dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508) Dec 23 13:50:09.824: INFO: Lookups using dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2685.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 23 13:50:15.262: INFO: DNS probes using dns-2685/dns-test-1a07c4ec-7174-4f68-a403-0e35d0511508 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:50:15.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2685" for this suite. 
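The wheezy/jessie probe loops above reduce to two checks from inside the pod: the kubelet-managed /etc/hosts resolves the pod's own names, and the pod's generated A record resolves over both UDP and TCP. Done by hand against an image that ships getent and dig (pod name and IP-derived record below are illustrative):

kubectl exec dns-hosts-demo -- cat /etc/hosts                # kubelet-written entries, incl. the pod's own IP
kubectl exec dns-hosts-demo -- getent hosts dns-hosts-demo   # resolves via those /etc/hosts entries
kubectl exec dns-hosts-demo -- dig +tcp +short 10-44-0-1.default.pod.cluster.local A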
Dec 23 13:50:21.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:50:21.569: INFO: namespace dns-2685 deletion completed in 6.153910198s • [SLOW TEST:24.000 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:50:21.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-27be8585-b29d-4b62-9ed1-d4562a7b66d4 STEP: Creating a pod to test consume secrets Dec 23 13:50:21.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c" in namespace "projected-72" to be "success or failure" Dec 23 13:50:21.827: INFO: Pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.875729ms Dec 23 13:50:23.840: INFO: Pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045486675s Dec 23 13:50:25.850: INFO: Pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05624811s Dec 23 13:50:27.870: INFO: Pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076413978s Dec 23 13:50:29.881: INFO: Pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086770524s STEP: Saw pod success Dec 23 13:50:29.881: INFO: Pod "pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c" satisfied condition "success or failure" Dec 23 13:50:29.885: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c container projected-secret-volume-test: STEP: delete the pod Dec 23 13:50:30.104: INFO: Waiting for pod pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c to disappear Dec 23 13:50:30.110: INFO: Pod pod-projected-secrets-25814029-3104-4b35-b37b-8b1f0f81408c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:50:30.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-72" for this suite. 
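"With mappings" means the secret key is renamed on disk through items[].path rather than surfacing under its own key name. Sketch (names and data illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1    # the mapping: key appears under this file name
EOF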
Dec 23 13:50:36.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:50:36.321: INFO: namespace projected-72 deletion completed in 6.199158881s • [SLOW TEST:14.752 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:50:36.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 23 13:50:36.394: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:50:48.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9337" for this suite. 
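Init containers run sequentially, and each must exit 0 before the next starts; only then do the regular containers run. With restartPolicy Never, a failing init container fails the whole pod rather than being retried. Minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox:1.29
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo main ran last"]
EOF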
Dec 23 13:50:54.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:50:55.110: INFO: namespace init-container-9337 deletion completed in 6.19732752s • [SLOW TEST:18.788 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:50:55.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:50:55.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3134" for this suite. Dec 23 13:51:01.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:51:01.511: INFO: namespace services-3134 deletion completed in 6.22559145s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.401 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:51:01.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Dec 23 13:51:01.647: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Dec 23 13:51:01.648: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6854' Dec 23 13:51:04.261: INFO: stderr: "" Dec 23 13:51:04.262: INFO: stdout: "service/redis-slave created\n" Dec 23 13:51:04.263: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Dec 23 13:51:04.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6854' Dec 23 13:51:04.853: INFO: stderr: "" Dec 23 13:51:04.853: INFO: stdout: "service/redis-master created\n" Dec 23 13:51:04.854: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Dec 23 13:51:04.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6854' Dec 23 13:51:05.618: INFO: stderr: "" Dec 23 13:51:05.618: INFO: stdout: "service/frontend created\n" Dec 23 13:51:05.620: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Dec 23 13:51:05.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6854' Dec 23 13:51:06.096: INFO: stderr: "" Dec 23 13:51:06.096: INFO: stdout: "deployment.apps/frontend created\n" Dec 23 13:51:06.098: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Dec 23 13:51:06.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6854' Dec 23 13:51:06.734: INFO: stderr: "" Dec 23 13:51:06.735: INFO: stdout: "deployment.apps/redis-master created\n" Dec 23 13:51:06.736: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Dec 23 13:51:06.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6854' Dec 23 13:51:07.789: INFO: stderr: "" Dec 23 13:51:07.789: 
INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Dec 23 13:51:07.789: INFO: Waiting for all frontend pods to be Running. Dec 23 13:51:32.843: INFO: Waiting for frontend to serve content. Dec 23 13:51:32.907: INFO: Trying to add a new entry to the guestbook. Dec 23 13:51:32.955: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Dec 23 13:51:32.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6854' Dec 23 13:51:33.206: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:51:33.206: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Dec 23 13:51:33.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6854' Dec 23 13:51:33.612: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:51:33.613: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 23 13:51:33.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6854' Dec 23 13:51:33.761: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:51:33.762: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 23 13:51:33.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6854' Dec 23 13:51:34.011: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:51:34.011: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 23 13:51:34.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6854' Dec 23 13:51:34.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:51:34.251: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 23 13:51:34.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6854' Dec 23 13:51:34.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 13:51:34.471: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:51:34.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6854" for this suite. 
Dec 23 13:52:20.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:52:20.734: INFO: namespace kubectl-6854 deletion completed in 46.241378714s • [SLOW TEST:79.222 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:52:20.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1223 13:52:30.907842 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 23 13:52:30.908: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:52:30.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2893" for this suite. 
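"Not orphaning" is the default delete path: removing the ReplicationController lets the garbage collector chase down the pods it owns through their ownerReferences. A rough rendition, with illustrative names and the pause image standing in for the test's workload:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl delete rc gc-demo-rc       # default cascade: GC deletes the owned pods too
kubectl get pods -l app=gc-demo    # should drain to empty; --cascade=false would have orphaned them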
Dec 23 13:52:37.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:52:37.530: INFO: namespace gc-2893 deletion completed in 6.615215609s • [SLOW TEST:16.795 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:52:37.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 23 13:52:53.908: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:52:53.934: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:52:55.935: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:52:55.947: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:52:57.935: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:52:57.945: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:52:59.937: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:52:59.948: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:53:01.935: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:53:02.204: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:53:03.935: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:53:03.948: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:53:05.935: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:53:05.947: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 13:53:07.935: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 13:53:07.947: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:53:07.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4330" for this suite. 
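A postStart httpGet hook fires right after the container starts, and the container is not marked Running until the handler returns; a failing handler gets the container killed. The test points the hook at a separate handler pod, so the host IP below is a placeholder for that pod's address:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.44.0.5            # placeholder: the handler pod's IP in the real test
          port: 8080
          path: /echo?msg=poststart
EOF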
Dec 23 13:53:29.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:53:30.103: INFO: namespace container-lifecycle-hook-4330 deletion completed in 22.147518938s • [SLOW TEST:52.573 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:53:30.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-89644a40-daaa-4922-a45b-29ae2a9e5b7e STEP: Creating a pod to test consume secrets Dec 23 13:53:30.366: INFO: Waiting up to 5m0s for pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4" in namespace "secrets-9511" to be "success or failure" Dec 23 13:53:30.403: INFO: Pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.958161ms Dec 23 13:53:32.414: INFO: Pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048178251s Dec 23 13:53:34.424: INFO: Pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058181942s Dec 23 13:53:36.437: INFO: Pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071564675s Dec 23 13:53:38.461: INFO: Pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094868466s STEP: Saw pod success Dec 23 13:53:38.461: INFO: Pod "pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4" satisfied condition "success or failure" Dec 23 13:53:38.485: INFO: Trying to get logs from node iruya-node pod pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4 container secret-volume-test: STEP: delete the pod Dec 23 13:53:38.641: INFO: Waiting for pod pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4 to disappear Dec 23 13:53:38.657: INFO: Pod pod-secrets-506b1995-e751-42c9-847c-92f1de7d04f4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:53:38.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9511" for this suite. 
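The point of the cross-namespace secret test is that secret names only need to be unique per namespace: a pod resolves a secret volume in its own namespace, so a like-named secret elsewhere is invisible to it. Shell rendition (names and data illustrative):

kubectl create namespace demo-ns-b
kubectl create secret generic shared-name --from-literal=data=from-default -n default
kubectl create secret generic shared-name --from-literal=data=from-b -n demo-ns-b
kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-ns-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data"]   # prints from-default, never from-b
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name        # resolved in the pod's own namespace
EOF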
Dec 23 13:53:44.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:53:44.927: INFO: namespace secrets-9511 deletion completed in 6.232208171s STEP: Destroying namespace "secret-namespace-811" for this suite. Dec 23 13:53:50.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:53:51.080: INFO: namespace secret-namespace-811 deletion completed in 6.153142247s • [SLOW TEST:20.975 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:53:51.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 23 13:53:51.153: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:54:07.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5235" for this suite. 
Dec 23 13:54:29.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 23 13:54:29.344: INFO: namespace init-container-5235 deletion completed in 22.259171289s • [SLOW TEST:38.263 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 23 13:54:29.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Dec 23 13:54:29.451: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix900421106/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 23 13:54:29.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1247" for this suite. 
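--unix-socket swaps the proxy's TCP listener for a filesystem socket, which is what the test then reads /api/ through. The same check by hand, with an illustrative socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy-demo.sock &
curl --silent --unix-socket /tmp/kubectl-proxy-demo.sock http://localhost/api/   # API versions JSON
kill %1   # stop the background proxy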
Dec 23 13:54:35.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:54:35.784: INFO: namespace kubectl-1247 deletion completed in 6.192385757s

• [SLOW TEST:6.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:54:35.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 23 13:54:35.979: INFO: Waiting up to 5m0s for pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d" in namespace "emptydir-1240" to be "success or failure"
Dec 23 13:54:36.008: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.338489ms
Dec 23 13:54:38.017: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037331913s
Dec 23 13:54:40.025: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04579735s
Dec 23 13:54:42.036: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056397497s
Dec 23 13:54:44.045: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065946125s
Dec 23 13:54:46.054: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075093919s
STEP: Saw pod success
Dec 23 13:54:46.055: INFO: Pod "pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d" satisfied condition "success or failure"
Dec 23 13:54:46.058: INFO: Trying to get logs from node iruya-node pod pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d container test-container: 
STEP: delete the pod
Dec 23 13:54:46.190: INFO: Waiting for pod pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d to disappear
Dec 23 13:54:46.194: INFO: Pod pod-dc50ea0b-f7bd-4ad1-b228-c192d613363d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:54:46.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1240" for this suite.
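A hand-rolled approximation of this check; the suite uses a dedicated mounttest image, so the busybox pod and stat call below are assumptions that show the shape of the test, not the real spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root, as in the (non-root,0644,default) case
  containers:
  - name: test-container
    image: busybox:1.28
    command: ['sh', '-c', 'stat -c "%a" /mnt/ed && ls -ld /mnt/ed']
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir: {}             # default medium: backed by node disk
EOF
# After the pod reaches Succeeded, the mode is in its log:
kubectl logs emptydir-mode-demo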
Dec 23 13:54:52.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:54:52.344: INFO: namespace emptydir-1240 deletion completed in 6.144477745s

• [SLOW TEST:16.560 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:54:52.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1196
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1196 to expose endpoints map[]
Dec 23 13:54:52.537: INFO: successfully validated that service endpoint-test2 in namespace services-1196 exposes endpoints map[] (22.173869ms elapsed)
STEP: Creating pod pod1 in namespace services-1196
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1196 to expose endpoints map[pod1:[80]]
Dec 23 13:54:56.757: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.19106646s elapsed, will retry)
Dec 23 13:54:59.804: INFO: successfully validated that service endpoint-test2 in namespace services-1196 exposes endpoints map[pod1:[80]] (7.238206234s elapsed)
STEP: Creating pod pod2 in namespace services-1196
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1196 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 23 13:55:04.253: INFO: Unexpected endpoints: found map[f0af629a-0582-4781-9400-08665619bed3:[80]], expected map[pod1:[80] pod2:[80]] (4.429175015s elapsed, will retry)
Dec 23 13:55:07.306: INFO: successfully validated that service endpoint-test2 in namespace services-1196 exposes endpoints map[pod1:[80] pod2:[80]] (7.481850045s elapsed)
STEP: Deleting pod pod1 in namespace services-1196
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1196 to expose endpoints map[pod2:[80]]
Dec 23 13:55:08.371: INFO: successfully validated that service endpoint-test2 in namespace services-1196 exposes endpoints map[pod2:[80]] (1.054879448s elapsed)
STEP: Deleting pod pod2 in namespace services-1196
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1196 to expose endpoints map[]
Dec 23 13:55:08.448: INFO: successfully validated that service endpoint-test2 in namespace services-1196 exposes endpoints map[] (61.432595ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:55:08.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1196" for this suite.
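The endpoint bookkeeping validated above can be watched manually with stock kubectl; all names below are illustrative:

kubectl create service clusterip endpoint-demo --tcp=80:80
# A service with no matching pods exposes an empty endpoints list:
kubectl get endpoints endpoint-demo
# Start a pod whose labels match the service selector (app=endpoint-demo):
kubectl run pod1 --image=nginx --restart=Never --labels=app=endpoint-demo --port=80
# Once pod1 is Ready its IP appears under ENDPOINTS; deleting it removes it again:
kubectl get endpoints endpoint-demo -w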
Dec 23 13:55:30.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:55:30.912: INFO: namespace services-1196 deletion completed in 22.297110218s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.568 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:55:30.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 23 13:55:31.129: INFO: Waiting up to 5m0s for pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b" in namespace "emptydir-3734" to be "success or failure"
Dec 23 13:55:31.218: INFO: Pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.361786ms
Dec 23 13:55:33.226: INFO: Pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0968454s
Dec 23 13:55:35.236: INFO: Pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107384968s
Dec 23 13:55:37.243: INFO: Pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114001673s
Dec 23 13:55:39.249: INFO: Pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120121141s
STEP: Saw pod success
Dec 23 13:55:39.249: INFO: Pod "pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b" satisfied condition "success or failure"
Dec 23 13:55:39.252: INFO: Trying to get logs from node iruya-node pod pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b container test-container: 
STEP: delete the pod
Dec 23 13:55:39.317: INFO: Waiting for pod pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b to disappear
Dec 23 13:55:39.361: INFO: Pod pod-ab90baf0-e4e1-43f0-b56b-e289ee0a301b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:55:39.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3734" for this suite.
Dec 23 13:55:45.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:55:45.604: INFO: namespace emptydir-3734 deletion completed in 6.236974409s

• [SLOW TEST:14.691 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:55:45.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 13:55:45.670: INFO: Creating ReplicaSet my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961
Dec 23 13:55:45.739: INFO: Pod name my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961: Found 0 pods out of 1
Dec 23 13:55:50.750: INFO: Pod name my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961: Found 1 pods out of 1
Dec 23 13:55:50.750: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961" is running
Dec 23 13:55:54.772: INFO: Pod "my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961-jqtnd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:55:45 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:55:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:55:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-23 13:55:45 +0000 UTC Reason: Message:}])
Dec 23 13:55:54.773: INFO: Trying to dial the pod
Dec 23 13:55:59.817: INFO: Controller my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961: Got expected result from replica 1 [my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961-jqtnd]: "my-hostname-basic-7f5f51c2-fcfe-4307-bfc3-2fd77e0b5961-jqtnd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:55:59.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1800" for this suite.
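A minimal ReplicaSet along the same lines; the image and tag below are assumptions (agnhost's serve-hostname mode echoes the pod's hostname over HTTP, which is what the dial step checks):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: registry.k8s.io/e2e-test-images/agnhost:2.39  # image/tag are assumptions
        args: ["serve-hostname"]      # responds to HTTP GET with the pod's hostname
        ports:
        - containerPort: 9376
EOF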
Dec 23 13:56:05.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:56:06.009: INFO: namespace replicaset-1800 deletion completed in 6.182167383s

• [SLOW TEST:20.404 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:56:06.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 13:56:06.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4" in namespace "projected-8849" to be "success or failure"
Dec 23 13:56:06.175: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.207121ms
Dec 23 13:56:08.189: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036290206s
Dec 23 13:56:10.200: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047267475s
Dec 23 13:56:12.242: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090187123s
Dec 23 13:56:14.259: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106560427s
Dec 23 13:56:16.270: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117600532s
STEP: Saw pod success
Dec 23 13:56:16.270: INFO: Pod "downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4" satisfied condition "success or failure"
Dec 23 13:56:16.274: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4 container client-container: 
STEP: delete the pod
Dec 23 13:56:16.328: INFO: Waiting for pod downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4 to disappear
Dec 23 13:56:16.344: INFO: Pod downwardapi-volume-e3b165f7-4000-4d48-bede-39167812ade4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:56:16.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8849" for this suite.
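What the pod under test is reading: a projected volume whose downwardAPI source exposes the container's memory request as a file. A sketch with illustrative names; the same mechanism backs the later cpu-default case, where limits.cpu falls back to node allocatable when no limit is set:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ['sh', '-c', 'cat /etc/podinfo/mem_request']
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi       # scale bytes down to Mi for the file contents
EOF
# The container log should print 32 (the request expressed in Mi):
kubectl logs projected-downward-demo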
Dec 23 13:56:22.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:56:22.558: INFO: namespace projected-8849 deletion completed in 6.200965896s

• [SLOW TEST:16.548 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:56:22.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 23 13:56:22.681: INFO: Waiting up to 5m0s for pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd" in namespace "containers-9612" to be "success or failure"
Dec 23 13:56:22.735: INFO: Pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd": Phase="Pending", Reason="", readiness=false. Elapsed: 53.293197ms
Dec 23 13:56:24.744: INFO: Pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062422233s
Dec 23 13:56:26.765: INFO: Pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083557186s
Dec 23 13:56:28.775: INFO: Pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093662771s
Dec 23 13:56:30.794: INFO: Pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113063245s
STEP: Saw pod success
Dec 23 13:56:30.794: INFO: Pod "client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd" satisfied condition "success or failure"
Dec 23 13:56:30.814: INFO: Trying to get logs from node iruya-node pod client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd container test-container: 
STEP: delete the pod
Dec 23 13:56:30.953: INFO: Waiting for pod client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd to disappear
Dec 23 13:56:30.962: INFO: Pod client-containers-460b872e-e78a-449f-9c13-64e0d0ff71dd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:56:30.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9612" for this suite.
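A sketch of the config being exercised: omit command and args and the image's own ENTRYPOINT/CMD run unchanged (busybox here is a stand-in for the suite's test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28    # no command/args: the image's default entrypoint runs
EOF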
Dec 23 13:56:36.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:56:37.075: INFO: namespace containers-9612 deletion completed in 6.108340938s

• [SLOW TEST:14.517 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:56:37.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 23 13:56:37.136: INFO: Waiting up to 5m0s for pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3" in namespace "emptydir-6476" to be "success or failure"
Dec 23 13:56:37.142: INFO: Pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.57643ms
Dec 23 13:56:39.156: INFO: Pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019942107s
Dec 23 13:56:41.165: INFO: Pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028465549s
Dec 23 13:56:43.171: INFO: Pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035167444s
Dec 23 13:56:45.181: INFO: Pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045057282s
STEP: Saw pod success
Dec 23 13:56:45.181: INFO: Pod "pod-7ccc48d2-da1e-4599-b326-e4164eda38c3" satisfied condition "success or failure"
Dec 23 13:56:45.186: INFO: Trying to get logs from node iruya-node pod pod-7ccc48d2-da1e-4599-b326-e4164eda38c3 container test-container: 
STEP: delete the pod
Dec 23 13:56:45.384: INFO: Waiting for pod pod-7ccc48d2-da1e-4599-b326-e4164eda38c3 to disappear
Dec 23 13:56:45.420: INFO: Pod pod-7ccc48d2-da1e-4599-b326-e4164eda38c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:56:45.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6476" for this suite.
Dec 23 13:56:51.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:56:51.692: INFO: namespace emptydir-6476 deletion completed in 6.263395168s

• [SLOW TEST:14.616 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:56:51.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6e840516-f3bf-4e6d-8196-ccc3af23eca8
STEP: Creating a pod to test consume secrets
Dec 23 13:56:51.978: INFO: Waiting up to 5m0s for pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa" in namespace "secrets-4670" to be "success or failure"
Dec 23 13:56:52.009: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa": Phase="Pending", Reason="", readiness=false. Elapsed: 30.833375ms
Dec 23 13:56:54.024: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045229689s
Dec 23 13:56:56.030: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050992388s
Dec 23 13:56:58.038: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059246676s
Dec 23 13:57:00.043: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064088302s
Dec 23 13:57:02.053: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074682223s
STEP: Saw pod success
Dec 23 13:57:02.054: INFO: Pod "pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa" satisfied condition "success or failure"
Dec 23 13:57:02.062: INFO: Trying to get logs from node iruya-node pod pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa container secret-volume-test: 
STEP: delete the pod
Dec 23 13:57:02.097: INFO: Waiting for pod pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa to disappear
Dec 23 13:57:02.106: INFO: Pod pod-secrets-1c2fe00a-9a22-48c9-9b35-f8ad24f131aa no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:57:02.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4670" for this suite.
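A reproduction sketch of the defaultMode behaviour (names are illustrative; note that YAML reads 0400 as octal, while JSON callers must pass the decimal 256):

kubectl create secret generic mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    # The visible files are symlinks; the real files (and modes) live under ..data/
    command: ['sh', '-c', 'ls -l /etc/secret-volume/..data/ && cat /etc/secret-volume/data-1']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-demo
      defaultMode: 0400    # octal in YAML; -r-------- on each projected file
EOF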
Dec 23 13:57:08.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:57:08.268: INFO: namespace secrets-4670 deletion completed in 6.156028565s

• [SLOW TEST:16.573 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:57:08.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 13:57:20.476: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.488: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.499: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.505: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.513: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.525: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.586: INFO: Unable to read jessie_udp@PodARecord from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.596: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9: the server could not find the requested resource (get pods dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9)
Dec 23 13:57:20.596: INFO: Lookups using dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 23 13:57:25.659: INFO: DNS probes using dns-1184/dns-test-4a113c68-2783-488d-8af0-d4a41d4bdbe9 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:57:25.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1184" for this suite.
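For a one-off version of these lookups from inside a cluster (busybox 1.28 is pinned because later busybox builds ship a broken nslookup):

kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local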
Dec 23 13:57:31.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:57:32.126: INFO: namespace dns-1184 deletion completed in 6.386983594s

• [SLOW TEST:23.855 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:57:32.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1223 13:58:02.876722 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 13:58:02.877: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:58:02.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-306" for this suite.
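The orphaning path is reproducible with stock kubectl; the deployment name is illustrative:

kubectl create deployment gc-demo --image=nginx
kubectl get rs -l app=gc-demo          # note the ReplicaSet the deployment created
# Delete only the deployment, setting PropagationPolicy to Orphan
# (recent kubectl spells it --cascade=orphan; kubectl <= 1.19 used --cascade=false):
kubectl delete deployment gc-demo --cascade=orphan
kubectl get rs -l app=gc-demo          # the ReplicaSet, and its pods, survive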
Dec 23 13:58:10.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:58:11.467: INFO: namespace gc-306 deletion completed in 8.583285564s

• [SLOW TEST:39.340 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:58:11.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 23 13:58:11.962: INFO: Waiting up to 5m0s for pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e" in namespace "emptydir-3591" to be "success or failure"
Dec 23 13:58:11.971: INFO: Pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395108ms
Dec 23 13:58:14.009: INFO: Pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047316328s
Dec 23 13:58:16.018: INFO: Pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055590206s
Dec 23 13:58:18.033: INFO: Pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070392014s
Dec 23 13:58:20.060: INFO: Pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097395929s
STEP: Saw pod success
Dec 23 13:58:20.060: INFO: Pod "pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e" satisfied condition "success or failure"
Dec 23 13:58:20.071: INFO: Trying to get logs from node iruya-node pod pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e container test-container: 
STEP: delete the pod
Dec 23 13:58:20.157: INFO: Waiting for pod pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e to disappear
Dec 23 13:58:20.162: INFO: Pod pod-22b3f2cd-4e52-452e-9085-ef586d2f9a1e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:58:20.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3591" for this suite.
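The tmpfs variant differs from the default-medium emptyDir cases above only in the volume's medium field; a runnable sketch (image and mount path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28
    command: ['sh', '-c', 'mount | grep /mnt/ed']   # should report a tmpfs mount
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory    # backs the volume with tmpfs instead of node disk
EOF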
Dec 23 13:58:26.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:58:26.349: INFO: namespace emptydir-3591 deletion completed in 6.175544257s

• [SLOW TEST:14.880 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:58:26.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 13:58:26.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c" in namespace "projected-2614" to be "success or failure"
Dec 23 13:58:26.662: INFO: Pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 125.350927ms
Dec 23 13:58:28.673: INFO: Pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137085831s
Dec 23 13:58:30.686: INFO: Pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149676168s
Dec 23 13:58:32.694: INFO: Pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157099228s
Dec 23 13:58:34.704: INFO: Pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167910547s
STEP: Saw pod success
Dec 23 13:58:34.705: INFO: Pod "downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c" satisfied condition "success or failure"
Dec 23 13:58:34.708: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c container client-container: 
STEP: delete the pod
Dec 23 13:58:34.800: INFO: Waiting for pod downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c to disappear
Dec 23 13:58:34.814: INFO: Pod downwardapi-volume-05f2defb-bcda-4513-b6fe-004538bd8f6c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:58:34.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2614" for this suite.
Dec 23 13:58:40.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:58:40.999: INFO: namespace projected-2614 deletion completed in 6.1801059s

• [SLOW TEST:14.650 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:58:41.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 13:58:41.139: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.612612ms)
Dec 23 13:58:41.147: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.089027ms)
Dec 23 13:58:41.151: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.594753ms)
Dec 23 13:58:41.162: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.404036ms)
Dec 23 13:58:41.166: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.650898ms)
Dec 23 13:58:41.175: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.460916ms)
Dec 23 13:58:41.184: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.51817ms)
Dec 23 13:58:41.214: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 30.019147ms)
Dec 23 13:58:41.219: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.599759ms)
Dec 23 13:58:41.226: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.586377ms)
Dec 23 13:58:41.233: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.089532ms)
Dec 23 13:58:41.241: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.108888ms)
Dec 23 13:58:41.247: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.771663ms)
Dec 23 13:58:41.252: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.15144ms)
Dec 23 13:58:41.269: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.736263ms)
Dec 23 13:58:41.275: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.526569ms)
Dec 23 13:58:41.280: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.923369ms)
Dec 23 13:58:41.284: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.308233ms)
Dec 23 13:58:41.288: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.086283ms)
Dec 23 13:58:41.293: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.114829ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:58:41.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4882" for this suite.
Dec 23 13:58:47.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:58:47.639: INFO: namespace proxy-4882 deletion completed in 6.341374932s

• [SLOW TEST:6.639 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
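The node-log proxy subresource exercised twenty times above is reachable with kubectl alone:

# Same subresource, explicit kubelet port, via the API server proxy:
kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"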
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:58:47.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 13:58:47.777: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb" in namespace "downward-api-7647" to be "success or failure"
Dec 23 13:58:47.784: INFO: Pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554336ms
Dec 23 13:58:49.803: INFO: Pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025615977s
Dec 23 13:58:51.810: INFO: Pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033029064s
Dec 23 13:58:53.835: INFO: Pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057527448s
Dec 23 13:58:55.850: INFO: Pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072835517s
STEP: Saw pod success
Dec 23 13:58:55.851: INFO: Pod "downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb" satisfied condition "success or failure"
Dec 23 13:58:55.862: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb container client-container: 
STEP: delete the pod
Dec 23 13:58:55.989: INFO: Waiting for pod downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb to disappear
Dec 23 13:58:56.006: INFO: Pod downwardapi-volume-e09a0ef4-43c4-4a6b-b966-cf97ea312ddb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 13:58:56.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7647" for this suite.
Dec 23 13:59:02.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 13:59:02.175: INFO: namespace downward-api-7647 deletion completed in 6.153794793s

• [SLOW TEST:14.536 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
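defaultMode applies to downward API volumes the same way it did to the secret volume earlier; a sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ['sh', '-c', 'ls -l /etc/podinfo/..data/']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # octal in YAML; JSON callers must use decimal 256
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF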
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 13:59:02.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-c4361ca6-b871-42dc-82ec-1396d7edc3dd in namespace container-probe-5473
Dec 23 13:59:10.378: INFO: Started pod test-webserver-c4361ca6-b871-42dc-82ec-1396d7edc3dd in namespace container-probe-5473
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 13:59:10.385: INFO: Initial restart count of pod test-webserver-c4361ca6-b871-42dc-82ec-1396d7edc3dd is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:03:12.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5473" for this suite.
Dec 23 14:03:18.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:03:18.871: INFO: namespace container-probe-5473 deletion completed in 6.441284574s

• [SLOW TEST:256.696 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
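A sketch of a pod that, like test-webserver above, should never trip its liveness probe; nginx stands in for the suite's image (which serves a real /healthz), so the probe path below targets /:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: test-webserver
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /          # stand-in; the suite's image exposes /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
EOF
# RESTARTS should stay at 0 for as long as you care to watch:
kubectl get pod liveness-demo -w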
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:03:18.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-2358/secret-test-798cb473-01ec-4592-915e-e2e1219acb97
STEP: Creating a pod to test consume secrets
Dec 23 14:03:19.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3" in namespace "secrets-2358" to be "success or failure"
Dec 23 14:03:19.093: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 54.856862ms
Dec 23 14:03:21.106: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068010732s
Dec 23 14:03:23.117: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079109773s
Dec 23 14:03:25.132: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093730209s
Dec 23 14:03:27.151: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112651146s
Dec 23 14:03:29.160: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121905074s
STEP: Saw pod success
Dec 23 14:03:29.160: INFO: Pod "pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3" satisfied condition "success or failure"
Dec 23 14:03:29.164: INFO: Trying to get logs from node iruya-node pod pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3 container env-test: 
STEP: delete the pod
Dec 23 14:03:29.219: INFO: Waiting for pod pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3 to disappear
Dec 23 14:03:29.225: INFO: Pod pod-configmaps-af597771-e18d-4205-81bc-98d18c9e73a3 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:03:29.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2358" for this suite.
Dec 23 14:03:35.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:03:35.484: INFO: namespace secrets-2358 deletion completed in 6.251279714s

• [SLOW TEST:16.612 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
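A reproduction sketch of consuming a secret through the environment rather than a volume (names are illustrative):

kubectl create secret generic env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.28
    command: ['sh', '-c', 'echo "SECRET_DATA=$SECRET_DATA"']
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-demo
          key: data-1
EOF
kubectl logs secret-env-demo    # prints SECRET_DATA=value-1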
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:03:35.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9334
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 14:03:35.536: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 14:04:07.865: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9334 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 14:04:07.865: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 14:04:08.262: INFO: Waiting for endpoints: map[]
Dec 23 14:04:08.274: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9334 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 14:04:08.275: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 14:04:08.674: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:04:08.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9334" for this suite.
Dec 23 14:04:31.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:04:31.426: INFO: namespace pod-network-test-9334 deletion completed in 22.738524056s

• [SLOW TEST:55.941 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
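The two ExecWithOptions lines above encode the actual connectivity check: a hostexec pod curls the test container's /dial endpoint on port 8080, which relays a UDP "hostName" request to each target pod on port 8081. The same probe, sketched in Go with the URL shape taken from the logged curl commands:

    package sketch

    import (
        "fmt"
        "io"
        "net/http"
    )

    // dialCheck reproduces the probe the framework runs via curl: ask the
    // webserver at testPodIP:8080 to send a UDP "hostName" request to
    // targetIP:8081 and return the JSON body listing which endpoints answered.
    func dialCheck(testPodIP, targetIP string) (string, error) {
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
            testPodIP, targetIP)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }

"Waiting for endpoints: map[]" means every expected responder has been crossed off the map, i.e. both dials succeeded.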
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:04:31.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 23 14:04:31.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3793'
Dec 23 14:04:33.636: INFO: stderr: ""
Dec 23 14:04:33.637: INFO: stdout: "pod/pause created\n"
Dec 23 14:04:33.637: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 23 14:04:33.637: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3793" to be "running and ready"
Dec 23 14:04:33.654: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.836664ms
Dec 23 14:04:35.664: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026855616s
Dec 23 14:04:37.672: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034735455s
Dec 23 14:04:39.698: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060683968s
Dec 23 14:04:41.705: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.067706787s
Dec 23 14:04:41.705: INFO: Pod "pause" satisfied condition "running and ready"
Dec 23 14:04:41.705: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 23 14:04:41.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3793'
Dec 23 14:04:41.898: INFO: stderr: ""
Dec 23 14:04:41.898: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 23 14:04:41.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3793'
Dec 23 14:04:41.998: INFO: stderr: ""
Dec 23 14:04:41.998: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 23 14:04:41.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3793'
Dec 23 14:04:42.163: INFO: stderr: ""
Dec 23 14:04:42.163: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 23 14:04:42.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3793'
Dec 23 14:04:42.315: INFO: stderr: ""
Dec 23 14:04:42.315: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 23 14:04:42.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3793'
Dec 23 14:04:42.483: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 14:04:42.483: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 23 14:04:42.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3793'
Dec 23 14:04:42.666: INFO: stderr: "No resources found.\n"
Dec 23 14:04:42.667: INFO: stdout: ""
Dec 23 14:04:42.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3793 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 14:04:42.845: INFO: stderr: ""
Dec 23 14:04:42.845: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:04:42.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3793" for this suite.
Dec 23 14:04:49.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:04:49.468: INFO: namespace kubectl-3793 deletion completed in 6.598147096s

• [SLOW TEST:18.042 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
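For reference: `kubectl label pods pause testing-label=testing-label-value` adds the label, and the trailing-dash form `testing-label-` removes it. At the API level both are a strategic-merge patch on the pod; a rough client-go equivalent follows (the context-aware Patch signature shown is from newer client-go releases, not the v1.15-era client this suite uses):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // labelPod adds (value != "") or removes (value == "") a label on a pod.
    // A null value in a strategic-merge patch deletes the key, which is what
    // the trailing-dash kubectl form does under the hood.
    func labelPod(c kubernetes.Interface, ns, pod, key, value string) error {
        patch := []byte(`{"metadata":{"labels":{"` + key + `":"` + value + `"}}}`)
        if value == "" {
            patch = []byte(`{"metadata":{"labels":{"` + key + `":null}}}`)
        }
        _, err := c.CoreV1().Pods(ns).Patch(
            context.TODO(), pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }

The `-L testing-label` flag in the verification steps just adds a TESTING-LABEL column to the `kubectl get` output, empty once the label is gone.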
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:04:49.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:04:49.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a" in namespace "projected-2993" to be "success or failure"
Dec 23 14:04:49.626: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.383993ms
Dec 23 14:04:51.636: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019986674s
Dec 23 14:04:53.742: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125931654s
Dec 23 14:04:55.841: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224507599s
Dec 23 14:04:57.862: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245367462s
Dec 23 14:04:59.876: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.259696444s
STEP: Saw pod success
Dec 23 14:04:59.876: INFO: Pod "downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a" satisfied condition "success or failure"
Dec 23 14:04:59.889: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a container client-container: 
STEP: delete the pod
Dec 23 14:05:00.104: INFO: Waiting for pod downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a to disappear
Dec 23 14:05:00.116: INFO: Pod downwardapi-volume-a945caf9-8135-49bc-9801-9585af2b0c1a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:05:00.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2993" for this suite.
Dec 23 14:05:06.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:05:06.365: INFO: namespace projected-2993 deletion completed in 6.242608085s

• [SLOW TEST:16.895 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
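The downward API file carrying the "container's memory limit" is not shown in the log; in a projected volume it would plausibly be declared like this (volume and file names are made up):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // memoryLimitVolume builds a projected volume that writes the named
    // container's memory limit to a file via the downward API -- the pattern
    // this test mounts and then reads back from the container logs.
    func memoryLimitVolume(containerName string) corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: containerName,
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    }},
                },
            },
        }
    }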
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:05:06.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-6zjv
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 14:05:06.662: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6zjv" in namespace "subpath-8195" to be "success or failure"
Dec 23 14:05:06.702: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Pending", Reason="", readiness=false. Elapsed: 39.631477ms
Dec 23 14:05:08.715: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053307809s
Dec 23 14:05:10.814: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152010059s
Dec 23 14:05:12.824: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162081218s
Dec 23 14:05:14.843: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181245511s
Dec 23 14:05:16.864: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 10.201867593s
Dec 23 14:05:18.887: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 12.225208515s
Dec 23 14:05:20.897: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 14.23512719s
Dec 23 14:05:22.907: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 16.245139116s
Dec 23 14:05:24.915: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 18.25284226s
Dec 23 14:05:26.923: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 20.260398071s
Dec 23 14:05:29.462: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 22.79949434s
Dec 23 14:05:31.473: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 24.811301197s
Dec 23 14:05:33.483: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 26.820907506s
Dec 23 14:05:35.495: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Running", Reason="", readiness=true. Elapsed: 28.833067922s
Dec 23 14:05:37.507: INFO: Pod "pod-subpath-test-projected-6zjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.845381353s
STEP: Saw pod success
Dec 23 14:05:37.508: INFO: Pod "pod-subpath-test-projected-6zjv" satisfied condition "success or failure"
Dec 23 14:05:37.512: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-6zjv container test-container-subpath-projected-6zjv: 
STEP: delete the pod
Dec 23 14:05:37.754: INFO: Waiting for pod pod-subpath-test-projected-6zjv to disappear
Dec 23 14:05:37.762: INFO: Pod pod-subpath-test-projected-6zjv no longer exists
STEP: Deleting pod pod-subpath-test-projected-6zjv
Dec 23 14:05:37.762: INFO: Deleting pod "pod-subpath-test-projected-6zjv" in namespace "subpath-8195"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:05:37.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8195" for this suite.
Dec 23 14:05:43.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:05:44.029: INFO: namespace subpath-8195 deletion completed in 6.243083442s

• [SLOW TEST:37.662 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
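What "subpaths with projected pod" means concretely: the container mounts a single entry of the projected volume rather than the whole directory, and the atomic-writer machinery must keep that subpath serving consistent content while the projection is rewritten (hence the long Running phase above, during which the container repeatedly reads the file). A sketch of such a mount, with invented names:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // subpathMount mounts one entry out of a projected volume instead of the
    // whole directory -- the mechanism under test here.
    func subpathMount(volName string) corev1.VolumeMount {
        return corev1.VolumeMount{
            Name:      volName,
            MountPath: "/test-volume",   // where the container sees the file
            SubPath:   "projected-file", // single entry inside the volume
        }
    }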
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:05:44.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 23 14:05:52.769: INFO: Successfully updated pod "annotationupdate35d5c7ef-7a46-4308-a381-a0281eaf2fed"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:05:54.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6436" for this suite.
Dec 23 14:06:32.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:06:33.042: INFO: namespace downward-api-6436 deletion completed in 38.191209102s

• [SLOW TEST:49.012 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
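The "annotationupdate..." pod above mounts its own annotations through the downward API; after the test patches the pod, the kubelet rewrites the mounted file, and that rewrite is the change being waited for. A minimal declaration of such a volume file (path name assumed):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // annotationsFile exposes the pod's annotations as a file; the kubelet
    // refreshes the file when the annotations change.
    func annotationsFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "annotations",
            FieldRef: &corev1.ObjectFieldSelector{
                FieldPath: "metadata.annotations",
            },
        }
    }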
SSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:06:33.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 23 14:06:41.206: INFO: Pod pod-hostip-16a55663-f9c4-435d-bfe8-2fd9787113a4 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:06:41.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2884" for this suite.
Dec 23 14:07:03.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:07:03.471: INFO: namespace pods-2884 deletion completed in 22.257587928s

• [SLOW TEST:30.428 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
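The hostIP logged above comes from pod.Status.HostIP, populated once the pod is bound and started on a node. Inside a workload the same value is usually obtained through a downward-API environment variable, for example:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // hostIPEnv injects the scheduling node's IP -- the value this test
    // asserts on -- into the container's environment.
    func hostIPEnv() corev1.EnvVar {
        return corev1.EnvVar{
            Name: "HOST_IP",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
            },
        }
    }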
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:07:03.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-976f201f-d346-4f79-b777-c6e0354c7284
STEP: Creating a pod to test consume secrets
Dec 23 14:07:03.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7" in namespace "projected-5123" to be "success or failure"
Dec 23 14:07:03.666: INFO: Pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.854457ms
Dec 23 14:07:05.675: INFO: Pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025383758s
Dec 23 14:07:07.683: INFO: Pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033756215s
Dec 23 14:07:10.017: INFO: Pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367558142s
Dec 23 14:07:12.029: INFO: Pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.378827299s
STEP: Saw pod success
Dec 23 14:07:12.029: INFO: Pod "pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7" satisfied condition "success or failure"
Dec 23 14:07:12.033: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 14:07:12.115: INFO: Waiting for pod pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7 to disappear
Dec 23 14:07:12.324: INFO: Pod pod-projected-secrets-9006edd0-51d2-4e29-bf54-d6097dab1ff7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:07:12.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5123" for this suite.
Dec 23 14:07:18.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:07:18.590: INFO: namespace projected-5123 deletion completed in 6.257300074s

• [SLOW TEST:15.119 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
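The non-root/defaultMode/fsGroup combination in this test's title translates to a pod spec along these lines; the UID, GID, and mode values are illustrative. fsGroup is what makes the volume readable: the kubelet chowns the projected files to that supplemental group so the non-root UID can open them despite the restrictive mode.

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nonRootSecretPodSpec sketches a projected secret volume with an explicit
    // defaultMode, mounted by a pod that runs as a non-root user.
    func nonRootSecretPodSpec(secretName string) corev1.PodSpec {
        uid, gid, mode := int64(1000), int64(1000), int32(0440)
        return corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid, // non-root
                FSGroup:   &gid, // volume files get this group
            },
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                            },
                        }},
                    },
                },
            }},
        }
    }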
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:07:18.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2994233e-b75e-4430-8803-6090afe9cf28
STEP: Creating a pod to test consume configMaps
Dec 23 14:07:18.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313" in namespace "configmap-2082" to be "success or failure"
Dec 23 14:07:18.768: INFO: Pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313": Phase="Pending", Reason="", readiness=false. Elapsed: 32.316111ms
Dec 23 14:07:20.787: INFO: Pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051608747s
Dec 23 14:07:22.853: INFO: Pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117110843s
Dec 23 14:07:24.874: INFO: Pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138395342s
Dec 23 14:07:26.916: INFO: Pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.179842999s
STEP: Saw pod success
Dec 23 14:07:26.916: INFO: Pod "pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313" satisfied condition "success or failure"
Dec 23 14:07:26.919: INFO: Trying to get logs from node iruya-node pod pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313 container configmap-volume-test: 
STEP: delete the pod
Dec 23 14:07:26.993: INFO: Waiting for pod pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313 to disappear
Dec 23 14:07:26.999: INFO: Pod pod-configmaps-845042ab-b845-4b3f-b0c7-76a7815cd313 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:07:26.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2082" for this suite.
Dec 23 14:07:33.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:07:33.212: INFO: namespace configmap-2082 deletion completed in 6.210166611s

• [SLOW TEST:14.620 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:07:33.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 23 14:07:33.294: INFO: Waiting up to 5m0s for pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b" in namespace "emptydir-9028" to be "success or failure"
Dec 23 14:07:33.352: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.342571ms
Dec 23 14:07:35.362: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067778804s
Dec 23 14:07:37.368: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074115591s
Dec 23 14:07:39.378: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083898925s
Dec 23 14:07:41.648: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353765195s
Dec 23 14:07:43.660: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.366224418s
STEP: Saw pod success
Dec 23 14:07:43.660: INFO: Pod "pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b" satisfied condition "success or failure"
Dec 23 14:07:43.664: INFO: Trying to get logs from node iruya-node pod pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b container test-container: 
STEP: delete the pod
Dec 23 14:07:43.741: INFO: Waiting for pod pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b to disappear
Dec 23 14:07:43.748: INFO: Pod pod-1e1acbd9-d4a1-43fd-ae2f-1a62b4bde88b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:07:43.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9028" for this suite.
Dec 23 14:07:49.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:07:49.935: INFO: namespace emptydir-9028 deletion completed in 6.180000395s

• [SLOW TEST:16.722 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
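For reference, the volume shape behind this test matrix; "default medium" is the empty Medium string, i.e. node-local disk rather than tmpfs, and the (non-root,0777,default) case checks the directory's mode and ownership from a non-root container:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // emptyDirVolume is the volume variant exercised by the emptydir matrix.
    func emptyDirVolume() corev1.Volume {
        return corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{
                    Medium: corev1.StorageMediumDefault, // "" -- node disk, not tmpfs
                },
            },
        }
    }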
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:07:49.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-6761985e-bae2-493b-aab4-0090b2d64138
STEP: Creating a pod to test consume configMaps
Dec 23 14:07:50.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62" in namespace "projected-3428" to be "success or failure"
Dec 23 14:07:50.180: INFO: Pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62": Phase="Pending", Reason="", readiness=false. Elapsed: 89.326374ms
Dec 23 14:07:52.190: INFO: Pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099603461s
Dec 23 14:07:54.200: INFO: Pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109944971s
Dec 23 14:07:56.208: INFO: Pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11806791s
Dec 23 14:07:58.219: INFO: Pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128624234s
STEP: Saw pod success
Dec 23 14:07:58.219: INFO: Pod "pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62" satisfied condition "success or failure"
Dec 23 14:07:58.230: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 14:07:58.291: INFO: Waiting for pod pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62 to disappear
Dec 23 14:07:58.298: INFO: Pod pod-projected-configmaps-0ebccd4b-3244-47f8-b04f-3b35f7c97d62 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:07:58.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3428" for this suite.
Dec 23 14:08:04.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:08:04.492: INFO: namespace projected-3428 deletion completed in 6.185578227s

• [SLOW TEST:14.556 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
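"Mappings and Item mode" means each projected ConfigMap key gets an explicit target path and its own file mode, overriding the volume's defaultMode. A sketch, with key, path, and mode as placeholders:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // mappedConfigMapProjection renames a single ConfigMap key to a new path
    // inside the volume and gives that one file its own mode.
    func mappedConfigMapProjection(cmName string) corev1.VolumeProjection {
        mode := int32(0400)
        return corev1.VolumeProjection{
            ConfigMap: &corev1.ConfigMapProjection{
                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",
                    Path: "path/to/data-2", // mapped name inside the volume
                    Mode: &mode,            // per-item mode overrides defaultMode
                }},
            },
        }
    }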
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:08:04.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2b131578-b318-4db0-b5f4-8717f49c646a
STEP: Creating a pod to test consume secrets
Dec 23 14:08:04.651: INFO: Waiting up to 5m0s for pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f" in namespace "secrets-9069" to be "success or failure"
Dec 23 14:08:04.688: INFO: Pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.752623ms
Dec 23 14:08:06.697: INFO: Pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046128706s
Dec 23 14:08:08.709: INFO: Pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057824216s
Dec 23 14:08:10.726: INFO: Pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074840334s
Dec 23 14:08:12.740: INFO: Pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089546744s
STEP: Saw pod success
Dec 23 14:08:12.741: INFO: Pod "pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f" satisfied condition "success or failure"
Dec 23 14:08:12.748: INFO: Trying to get logs from node iruya-node pod pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f container secret-volume-test: 
STEP: delete the pod
Dec 23 14:08:12.940: INFO: Waiting for pod pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f to disappear
Dec 23 14:08:12.951: INFO: Pod pod-secrets-b993272b-387e-45fa-8c85-d9b3b2c4d31f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:08:12.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9069" for this suite.
Dec 23 14:08:19.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:08:19.134: INFO: namespace secrets-9069 deletion completed in 6.174804626s

• [SLOW TEST:14.641 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:08:19.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 23 14:08:19.251: INFO: namespace kubectl-6624
Dec 23 14:08:19.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6624'
Dec 23 14:08:19.632: INFO: stderr: ""
Dec 23 14:08:19.632: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 23 14:08:20.655: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:20.656: INFO: Found 0 / 1
Dec 23 14:08:21.646: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:21.646: INFO: Found 0 / 1
Dec 23 14:08:22.652: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:22.652: INFO: Found 0 / 1
Dec 23 14:08:23.665: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:23.665: INFO: Found 0 / 1
Dec 23 14:08:24.647: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:24.648: INFO: Found 0 / 1
Dec 23 14:08:25.652: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:25.653: INFO: Found 0 / 1
Dec 23 14:08:26.642: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:26.642: INFO: Found 0 / 1
Dec 23 14:08:27.651: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:27.651: INFO: Found 1 / 1
Dec 23 14:08:27.651: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 23 14:08:27.658: INFO: Selector matched 1 pods for map[app:redis]
Dec 23 14:08:27.658: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 23 14:08:27.658: INFO: wait on redis-master startup in kubectl-6624 
Dec 23 14:08:27.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2f7bd redis-master --namespace=kubectl-6624'
Dec 23 14:08:27.949: INFO: stderr: ""
Dec 23 14:08:27.949: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Dec 14:08:25.748 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Dec 14:08:25.749 # Server started, Redis version 3.2.12\n1:M 23 Dec 14:08:25.749 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Dec 14:08:25.749 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 23 14:08:27.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6624'
Dec 23 14:08:28.168: INFO: stderr: ""
Dec 23 14:08:28.168: INFO: stdout: "service/rm2 exposed\n"
Dec 23 14:08:28.174: INFO: Service rm2 in namespace kubectl-6624 found.
STEP: exposing service
Dec 23 14:08:30.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6624'
Dec 23 14:08:30.443: INFO: stderr: ""
Dec 23 14:08:30.443: INFO: stdout: "service/rm3 exposed\n"
Dec 23 14:08:30.516: INFO: Service rm3 in namespace kubectl-6624 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:08:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6624" for this suite.
Dec 23 14:08:56.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:08:56.709: INFO: namespace kubectl-6624 deletion completed in 24.157273091s

• [SLOW TEST:37.574 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
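The two expose commands create plain Services in front of the RC's pods. What `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` generates is approximately the object below; the selector is inferred from the RC's pod labels and is assumed here (the log only shows the app=redis selector used while waiting):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // exposedService approximates the Service kubectl expose creates:
    // port 1234 on the service forwards to port 6379 in the selected pods.
    func exposedService(ns string) *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: ns},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "redis"}, // assumed from the RC
                Ports: []corev1.ServicePort{{
                    Port:       1234,
                    TargetPort: intstr.FromInt(6379),
                }},
            },
        }
    }

Exposing a Service (rm3) works the same way, except the selector is copied from the existing Service instead of an RC.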
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:08:56.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:08:56.912: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 23 14:08:56.927: INFO: Number of nodes with available pods: 0
Dec 23 14:08:56.927: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 23 14:08:57.102: INFO: Number of nodes with available pods: 0
Dec 23 14:08:57.102: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:08:58.289: INFO: Number of nodes with available pods: 0
Dec 23 14:08:58.290: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:08:59.119: INFO: Number of nodes with available pods: 0
Dec 23 14:08:59.120: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:00.112: INFO: Number of nodes with available pods: 0
Dec 23 14:09:00.113: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:01.116: INFO: Number of nodes with available pods: 0
Dec 23 14:09:01.116: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:02.115: INFO: Number of nodes with available pods: 0
Dec 23 14:09:02.115: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:03.135: INFO: Number of nodes with available pods: 0
Dec 23 14:09:03.135: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:04.113: INFO: Number of nodes with available pods: 0
Dec 23 14:09:04.113: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:05.136: INFO: Number of nodes with available pods: 1
Dec 23 14:09:05.136: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 23 14:09:05.186: INFO: Number of nodes with available pods: 1
Dec 23 14:09:05.186: INFO: Number of running nodes: 0, number of available pods: 1
Dec 23 14:09:06.198: INFO: Number of nodes with available pods: 0
Dec 23 14:09:06.198: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 23 14:09:06.229: INFO: Number of nodes with available pods: 0
Dec 23 14:09:06.229: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:07.241: INFO: Number of nodes with available pods: 0
Dec 23 14:09:07.241: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:08.242: INFO: Number of nodes with available pods: 0
Dec 23 14:09:08.242: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:09.243: INFO: Number of nodes with available pods: 0
Dec 23 14:09:09.243: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:10.244: INFO: Number of nodes with available pods: 0
Dec 23 14:09:10.244: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:11.245: INFO: Number of nodes with available pods: 0
Dec 23 14:09:11.246: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:12.244: INFO: Number of nodes with available pods: 0
Dec 23 14:09:12.244: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:13.271: INFO: Number of nodes with available pods: 0
Dec 23 14:09:13.271: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:14.243: INFO: Number of nodes with available pods: 0
Dec 23 14:09:14.243: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:15.240: INFO: Number of nodes with available pods: 0
Dec 23 14:09:15.240: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:16.239: INFO: Number of nodes with available pods: 0
Dec 23 14:09:16.239: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:17.239: INFO: Number of nodes with available pods: 0
Dec 23 14:09:17.240: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:18.238: INFO: Number of nodes with available pods: 0
Dec 23 14:09:18.238: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:19.247: INFO: Number of nodes with available pods: 0
Dec 23 14:09:19.247: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:20.244: INFO: Number of nodes with available pods: 0
Dec 23 14:09:20.244: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:21.254: INFO: Number of nodes with available pods: 0
Dec 23 14:09:21.254: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:22.247: INFO: Number of nodes with available pods: 0
Dec 23 14:09:22.247: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:23.248: INFO: Number of nodes with available pods: 0
Dec 23 14:09:23.248: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:09:24.244: INFO: Number of nodes with available pods: 1
Dec 23 14:09:24.244: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4611, will wait for the garbage collector to delete the pods
Dec 23 14:09:24.329: INFO: Deleting DaemonSet.extensions daemon-set took: 14.488992ms
Dec 23 14:09:25.030: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.94901ms
Dec 23 14:09:30.836: INFO: Number of nodes with available pods: 0
Dec 23 14:09:30.836: INFO: Number of running nodes: 0, number of available pods: 0
Dec 23 14:09:30.839: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4611/daemonsets","resourceVersion":"17771649"},"items":null}

Dec 23 14:09:30.841: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4611/pods","resourceVersion":"17771649"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:09:30.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4611" for this suite.
Dec 23 14:09:36.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:09:37.060: INFO: namespace daemonsets-4611 deletion completed in 6.120042072s

• [SLOW TEST:40.350 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
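The "complex" daemon here is one with a node selector plus a RollingUpdate strategy; relabeling a node into and out of the selector (blue -> green above) is what drives the launched/unscheduled transitions in the log. An assumed shape of such a DaemonSet -- the label keys and image are guesses, not taken from the log:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // complexDaemonSet pins its pods to nodes carrying a color label and
    // updates them with a RollingUpdate strategy.
    func complexDaemonSet(ns string) *appsv1.DaemonSet {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        return &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        NodeSelector: map[string]string{"color": "green"}, // assumed key
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "k8s.gcr.io/pause:3.1", // placeholder image
                        }},
                    },
                },
            },
        }
    }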
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:09:37.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 23 14:09:38.117: INFO: Pod name wrapped-volume-race-4e519793-ec8e-4c7b-87e9-7b4aaa75a425: Found 0 pods out of 5
Dec 23 14:09:43.136: INFO: Pod name wrapped-volume-race-4e519793-ec8e-4c7b-87e9-7b4aaa75a425: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4e519793-ec8e-4c7b-87e9-7b4aaa75a425 in namespace emptydir-wrapper-8676, will wait for the garbage collector to delete the pods
Dec 23 14:10:13.280: INFO: Deleting ReplicationController wrapped-volume-race-4e519793-ec8e-4c7b-87e9-7b4aaa75a425 took: 26.229293ms
Dec 23 14:10:13.781: INFO: Terminating ReplicationController wrapped-volume-race-4e519793-ec8e-4c7b-87e9-7b4aaa75a425 pods took: 501.650337ms
STEP: Creating RC which spawns configmap-volume pods
Dec 23 14:10:57.347: INFO: Pod name wrapped-volume-race-a5821987-ab0d-4b7f-87f8-c7289c47140d: Found 0 pods out of 5
Dec 23 14:11:02.368: INFO: Pod name wrapped-volume-race-a5821987-ab0d-4b7f-87f8-c7289c47140d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a5821987-ab0d-4b7f-87f8-c7289c47140d in namespace emptydir-wrapper-8676, will wait for the garbage collector to delete the pods
Dec 23 14:11:32.472: INFO: Deleting ReplicationController wrapped-volume-race-a5821987-ab0d-4b7f-87f8-c7289c47140d took: 15.528034ms
Dec 23 14:11:34.473: INFO: Terminating ReplicationController wrapped-volume-race-a5821987-ab0d-4b7f-87f8-c7289c47140d pods took: 2.001221473s
STEP: Creating RC which spawns configmap-volume pods
Dec 23 14:12:27.227: INFO: Pod name wrapped-volume-race-fa74e924-af77-4fe4-95a5-424724bf100a: Found 0 pods out of 5
Dec 23 14:12:32.303: INFO: Pod name wrapped-volume-race-fa74e924-af77-4fe4-95a5-424724bf100a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fa74e924-af77-4fe4-95a5-424724bf100a in namespace emptydir-wrapper-8676, will wait for the garbage collector to delete the pods
Dec 23 14:13:04.439: INFO: Deleting ReplicationController wrapped-volume-race-fa74e924-af77-4fe4-95a5-424724bf100a took: 22.935041ms
Dec 23 14:13:04.841: INFO: Terminating ReplicationController wrapped-volume-race-fa74e924-af77-4fe4-95a5-424724bf100a pods took: 401.554396ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:13:47.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8676" for this suite.
Dec 23 14:13:57.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:13:57.928: INFO: namespace emptydir-wrapper-8676 deletion completed in 10.197727323s

• [SLOW TEST:260.867 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
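Each racing pod mounts one volume per ConfigMap (50 are created above), and the RC starts five such pods at once, three times over; the point is to catch kubelet races when many wrapped volumes are set up concurrently. The volume list might be built like so -- the naming scheme is invented:

    package sketch

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // manyConfigMapVolumes builds one ConfigMap-backed volume per name, the
    // per-pod volume fan-out this race test mounts.
    func manyConfigMapVolumes(names []string) []corev1.Volume {
        vols := make([]corev1.Volume, 0, len(names))
        for i, name := range names {
            vols = append(vols, corev1.Volume{
                Name: fmt.Sprintf("racey-configmap-%d", i),
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: name},
                    },
                },
            })
        }
        return vols
    }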
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:13:57.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-9aff7443-b5b4-4000-b23f-075988d78b49
STEP: Creating a pod to test consume secrets
Dec 23 14:13:58.060: INFO: Waiting up to 5m0s for pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349" in namespace "secrets-3634" to be "success or failure"
Dec 23 14:13:58.088: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Pending", Reason="", readiness=false. Elapsed: 28.289986ms
Dec 23 14:14:00.101: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04140993s
Dec 23 14:14:02.110: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050059688s
Dec 23 14:14:04.124: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063555973s
Dec 23 14:14:06.138: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078487564s
Dec 23 14:14:08.150: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089667672s
Dec 23 14:14:10.162: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.102457514s
STEP: Saw pod success
Dec 23 14:14:10.163: INFO: Pod "pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349" satisfied condition "success or failure"
Dec 23 14:14:10.167: INFO: Trying to get logs from node iruya-node pod pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349 container secret-volume-test: 
STEP: delete the pod
Dec 23 14:14:10.225: INFO: Waiting for pod pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349 to disappear
Dec 23 14:14:10.231: INFO: Pod pod-secrets-4a76aba8-e254-4c58-a233-0d2d42291349 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:14:10.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3634" for this suite.
Dec 23 14:14:16.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:14:16.455: INFO: namespace secrets-3634 deletion completed in 6.214041168s

• [SLOW TEST:18.526 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
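The "mappings and Item Mode" in the test name refer to a secret volume's per-item key-to-path remapping combined with an explicit per-file mode. A minimal sketch of the pod spec shape, with illustrative secret/key names and busybox standing in for the test image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      items:
      - key: data-1                   # mapped to a new path instead of /etc/secret-volume/data-1
        path: new-path-data-1
        mode: 0400                    # the "Item Mode"; YAML octal

The pod runs its command to completion and exits, which is why the framework polls for phase Succeeded under the "success or failure" condition rather than waiting for Running.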
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:14:16.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-bcb07b71-b257-457c-86e6-3b22e8d48676
STEP: Creating a pod to test consume secrets
Dec 23 14:14:16.615: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205" in namespace "projected-2241" to be "success or failure"
Dec 23 14:14:16.623: INFO: Pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205": Phase="Pending", Reason="", readiness=false. Elapsed: 7.156557ms
Dec 23 14:14:18.637: INFO: Pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021545339s
Dec 23 14:14:20.649: INFO: Pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03321689s
Dec 23 14:14:22.670: INFO: Pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05437368s
Dec 23 14:14:24.678: INFO: Pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062092238s
STEP: Saw pod success
Dec 23 14:14:24.678: INFO: Pod "pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205" satisfied condition "success or failure"
Dec 23 14:14:24.683: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 14:14:24.736: INFO: Waiting for pod pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205 to disappear
Dec 23 14:14:24.740: INFO: Pod pod-projected-secrets-ea942399-583f-49cc-851f-c83934832205 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:14:24.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2241" for this suite.
Dec 23 14:14:30.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:14:30.924: INFO: namespace projected-2241 deletion completed in 6.17762983s

• [SLOW TEST:14.467 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
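A projected volume wraps one or more sources (secrets, configMaps, downward API fields, service account tokens) behind a single mount point. The shape this test consumes, sketched with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example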
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:14:30.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:15:31.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9413" for this suite.
Dec 23 14:15:53.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:15:53.703: INFO: namespace container-probe-9413 deletion completed in 22.184598471s

• [SLOW TEST:82.779 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
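The assertion here rests on two properties of readiness probes: a pod whose probe keeps failing is never marked Ready, and, unlike a liveness probe, a failing readiness probe never restarts the container, so the restart count stays at 0 for the whole minute the test watches. A minimal sketch of such a pod, assuming an exec probe and busybox:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver                # illustrative name
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]       # always exits 1, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5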
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:15:53.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 23 14:15:53.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8887 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 23 14:16:04.494: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 23 14:16:04.494: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:16:06.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8887" for this suite.
Dec 23 14:16:12.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:16:12.651: INFO: namespace kubectl-8887 deletion completed in 6.129846785s

• [SLOW TEST:18.947 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
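The full command appears verbatim in the log above; --rm makes kubectl delete the Job once the attached session ends, and --attach --stdin is what lets the test pipe "abcd1234" into cat. The deprecated --generator=job/v1 produced roughly the following Job (a sketch; only the name, image, restart policy, and command come from the run):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                   # required so --attach --stdin has something to feed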
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:16:12.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-196
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-196
STEP: Creating statefulset with conflicting port in namespace statefulset-196
STEP: Waiting until pod test-pod starts running in namespace statefulset-196
STEP: Waiting until stateful pod ss-0 has been deleted and recreated at least once in namespace statefulset-196
Dec 23 14:16:22.963: INFO: Observed stateful pod in namespace: statefulset-196, name: ss-0, uid: b3bfe958-b7a3-4a07-a484-7ee795ff639c, status phase: Pending. Waiting for statefulset controller to delete.
Dec 23 14:16:26.525: INFO: Observed stateful pod in namespace: statefulset-196, name: ss-0, uid: b3bfe958-b7a3-4a07-a484-7ee795ff639c, status phase: Failed. Waiting for statefulset controller to delete.
Dec 23 14:16:26.636: INFO: Observed stateful pod in namespace: statefulset-196, name: ss-0, uid: b3bfe958-b7a3-4a07-a484-7ee795ff639c, status phase: Failed. Waiting for statefulset controller to delete.
Dec 23 14:16:26.702: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-196
STEP: Removing pod with conflicting port in namespace statefulset-196
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-196 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 23 14:16:38.951: INFO: Deleting all statefulset in ns statefulset-196
Dec 23 14:16:38.956: INFO: Scaling statefulset ss to 0
Dec 23 14:16:48.988: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 14:16:49.001: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:16:49.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-196" for this suite.
Dec 23 14:16:55.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:16:55.206: INFO: namespace statefulset-196 deletion completed in 6.158808889s

• [SLOW TEST:42.555 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
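What the log shows: a bare pod and a one-replica StatefulSet are pinned to the same node with the same hostPort, so ss-0 lands in phase Failed; the StatefulSet controller repeatedly deletes and recreates it (the "Waiting for statefulset controller to delete" observations), and once the conflicting pod is removed the recreated ss-0 finally runs. A sketch of the conflicting pair, assuming an illustrative hostPort of 8080 and the nginx image seen elsewhere in this run:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-node                # pin both to one node to force the port conflict
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                   # the headless service created in the BeforeEach
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: iruya-node
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 8080              # same hostPort as test-pod, so ss-0 cannot start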
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:16:55.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ac2ae25e-d96a-4a79-94ba-598dd44d33ef
STEP: Creating a pod to test consume secrets
Dec 23 14:16:55.298: INFO: Waiting up to 5m0s for pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2" in namespace "secrets-5912" to be "success or failure"
Dec 23 14:16:55.305: INFO: Pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.254973ms
Dec 23 14:16:57.318: INFO: Pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020136024s
Dec 23 14:16:59.329: INFO: Pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030919438s
Dec 23 14:17:01.339: INFO: Pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041529227s
Dec 23 14:17:03.351: INFO: Pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053117845s
STEP: Saw pod success
Dec 23 14:17:03.351: INFO: Pod "pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2" satisfied condition "success or failure"
Dec 23 14:17:03.355: INFO: Trying to get logs from node iruya-node pod pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2 container secret-env-test: 
STEP: delete the pod
Dec 23 14:17:03.408: INFO: Waiting for pod pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2 to disappear
Dec 23 14:17:03.425: INFO: Pod pod-secrets-53a322d5-d9e7-497a-a452-e225952da9e2 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:17:03.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5912" for this suite.
Dec 23 14:17:09.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:17:09.566: INFO: namespace secrets-5912 deletion completed in 6.135467846s

• [SLOW TEST:14.360 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
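Unlike the volume-based secret tests earlier in this run, this one injects the secret through the container environment. The spec shape, with illustrative secret/key/variable names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]      # the test checks the output for the injected value
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example
          key: data-1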
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:17:09.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 23 14:17:09.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9688'
Dec 23 14:17:10.216: INFO: stderr: ""
Dec 23 14:17:10.217: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 14:17:10.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:10.688: INFO: stderr: ""
Dec 23 14:17:10.688: INFO: stdout: "update-demo-nautilus-55q24 update-demo-nautilus-gds8g "
Dec 23 14:17:10.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55q24 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:10.819: INFO: stderr: ""
Dec 23 14:17:10.819: INFO: stdout: ""
Dec 23 14:17:10.819: INFO: update-demo-nautilus-55q24 is created but not running
Dec 23 14:17:15.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:17.421: INFO: stderr: ""
Dec 23 14:17:17.421: INFO: stdout: "update-demo-nautilus-55q24 update-demo-nautilus-gds8g "
Dec 23 14:17:17.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55q24 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:17.809: INFO: stderr: ""
Dec 23 14:17:17.809: INFO: stdout: ""
Dec 23 14:17:17.809: INFO: update-demo-nautilus-55q24 is created but not running
Dec 23 14:17:22.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:23.032: INFO: stderr: ""
Dec 23 14:17:23.032: INFO: stdout: "update-demo-nautilus-55q24 update-demo-nautilus-gds8g "
Dec 23 14:17:23.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55q24 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:23.176: INFO: stderr: ""
Dec 23 14:17:23.176: INFO: stdout: "true"
Dec 23 14:17:23.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55q24 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:23.263: INFO: stderr: ""
Dec 23 14:17:23.263: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:17:23.263: INFO: validating pod update-demo-nautilus-55q24
Dec 23 14:17:23.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:17:23.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:17:23.271: INFO: update-demo-nautilus-55q24 is verified up and running
Dec 23 14:17:23.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:23.372: INFO: stderr: ""
Dec 23 14:17:23.372: INFO: stdout: "true"
Dec 23 14:17:23.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:23.467: INFO: stderr: ""
Dec 23 14:17:23.468: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:17:23.468: INFO: validating pod update-demo-nautilus-gds8g
Dec 23 14:17:23.550: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:17:23.551: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:17:23.551: INFO: update-demo-nautilus-gds8g is verified up and running
STEP: scaling down the replication controller
Dec 23 14:17:23.554: INFO: scanned /root for discovery docs: 
Dec 23 14:17:23.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9688'
Dec 23 14:17:25.000: INFO: stderr: ""
Dec 23 14:17:25.000: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 14:17:25.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:25.203: INFO: stderr: ""
Dec 23 14:17:25.203: INFO: stdout: "update-demo-nautilus-55q24 update-demo-nautilus-gds8g "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 23 14:17:30.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:30.383: INFO: stderr: ""
Dec 23 14:17:30.383: INFO: stdout: "update-demo-nautilus-55q24 update-demo-nautilus-gds8g "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 23 14:17:35.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:35.569: INFO: stderr: ""
Dec 23 14:17:35.570: INFO: stdout: "update-demo-nautilus-55q24 update-demo-nautilus-gds8g "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 23 14:17:40.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:40.853: INFO: stderr: ""
Dec 23 14:17:40.854: INFO: stdout: "update-demo-nautilus-gds8g "
Dec 23 14:17:40.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:40.984: INFO: stderr: ""
Dec 23 14:17:40.984: INFO: stdout: "true"
Dec 23 14:17:40.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:41.118: INFO: stderr: ""
Dec 23 14:17:41.119: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:17:41.119: INFO: validating pod update-demo-nautilus-gds8g
Dec 23 14:17:41.134: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:17:41.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:17:41.134: INFO: update-demo-nautilus-gds8g is verified up and running
STEP: scaling up the replication controller
Dec 23 14:17:41.139: INFO: scanned /root for discovery docs: 
Dec 23 14:17:41.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9688'
Dec 23 14:17:42.432: INFO: stderr: ""
Dec 23 14:17:42.433: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 14:17:42.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:42.612: INFO: stderr: ""
Dec 23 14:17:42.612: INFO: stdout: "update-demo-nautilus-gds8g update-demo-nautilus-x7njc "
Dec 23 14:17:42.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:42.766: INFO: stderr: ""
Dec 23 14:17:42.766: INFO: stdout: "true"
Dec 23 14:17:42.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:42.940: INFO: stderr: ""
Dec 23 14:17:42.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:17:42.940: INFO: validating pod update-demo-nautilus-gds8g
Dec 23 14:17:42.962: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:17:42.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:17:42.962: INFO: update-demo-nautilus-gds8g is verified up and running
Dec 23 14:17:42.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7njc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:43.102: INFO: stderr: ""
Dec 23 14:17:43.102: INFO: stdout: ""
Dec 23 14:17:43.102: INFO: update-demo-nautilus-x7njc is created but not running
Dec 23 14:17:48.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688'
Dec 23 14:17:48.327: INFO: stderr: ""
Dec 23 14:17:48.328: INFO: stdout: "update-demo-nautilus-gds8g update-demo-nautilus-x7njc "
Dec 23 14:17:48.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:48.503: INFO: stderr: ""
Dec 23 14:17:48.503: INFO: stdout: "true"
Dec 23 14:17:48.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gds8g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:48.654: INFO: stderr: ""
Dec 23 14:17:48.655: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:17:48.655: INFO: validating pod update-demo-nautilus-gds8g
Dec 23 14:17:48.667: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:17:48.667: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:17:48.667: INFO: update-demo-nautilus-gds8g is verified up and running
Dec 23 14:17:48.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7njc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:48.776: INFO: stderr: ""
Dec 23 14:17:48.776: INFO: stdout: "true"
Dec 23 14:17:48.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7njc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9688'
Dec 23 14:17:48.963: INFO: stderr: ""
Dec 23 14:17:48.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:17:48.963: INFO: validating pod update-demo-nautilus-x7njc
Dec 23 14:17:48.972: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:17:48.972: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:17:48.972: INFO: update-demo-nautilus-x7njc is verified up and running
STEP: using delete to clean up resources
Dec 23 14:17:48.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9688'
Dec 23 14:17:49.135: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 14:17:49.135: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 23 14:17:49.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9688'
Dec 23 14:17:49.312: INFO: stderr: "No resources found.\n"
Dec 23 14:17:49.312: INFO: stdout: ""
Dec 23 14:17:49.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9688 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 14:17:49.490: INFO: stderr: ""
Dec 23 14:17:49.490: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:17:49.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9688" for this suite.
Dec 23 14:18:11.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:18:11.668: INFO: namespace kubectl-9688 deletion completed in 22.165915304s

• [SLOW TEST:62.102 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
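The whole scale cycle above reduces to three kubectl invocations, all visible verbatim in the log (the --kubeconfig flag is dropped here for brevity):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9688
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9688
kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9688

The go-template polling is why the log alternates between listing pod names and probing each pod's containerStatuses for state.running before verifying the image and the served data.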
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:18:11.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 23 14:18:11.805: INFO: Waiting up to 5m0s for pod "pod-41bf8c87-b903-4a54-886d-8408603138ac" in namespace "emptydir-5273" to be "success or failure"
Dec 23 14:18:11.813: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.828774ms
Dec 23 14:18:13.827: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022371803s
Dec 23 14:18:15.839: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034034659s
Dec 23 14:18:17.853: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04791825s
Dec 23 14:18:19.869: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063451845s
Dec 23 14:18:21.917: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112015521s
STEP: Saw pod success
Dec 23 14:18:21.918: INFO: Pod "pod-41bf8c87-b903-4a54-886d-8408603138ac" satisfied condition "success or failure"
Dec 23 14:18:21.934: INFO: Trying to get logs from node iruya-node pod pod-41bf8c87-b903-4a54-886d-8408603138ac container test-container: 
STEP: delete the pod
Dec 23 14:18:22.188: INFO: Waiting for pod pod-41bf8c87-b903-4a54-886d-8408603138ac to disappear
Dec 23 14:18:22.196: INFO: Pod pod-41bf8c87-b903-4a54-886d-8408603138ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:18:22.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5273" for this suite.
Dec 23 14:18:28.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:18:28.432: INFO: namespace emptydir-5273 deletion completed in 6.229733958s

• [SLOW TEST:16.763 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
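The tuple in the test name decodes as: run as a non-root user, create a file with mode 0666, on an emptyDir with the default medium (node disk rather than tmpfs). A sketch of the pod, assuming busybox in place of the test image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # the "non-root" part
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium; medium: Memory would be the tmpfs variant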
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:18:28.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:18:28.584: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 23 14:18:33.599: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 23 14:18:37.618: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 23 14:18:39.630: INFO: Creating deployment "test-rollover-deployment"
Dec 23 14:18:39.650: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 23 14:18:41.670: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 23 14:18:41.684: INFO: Ensure that both replica sets have 1 created replica
Dec 23 14:18:41.692: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 23 14:18:41.708: INFO: Updating deployment test-rollover-deployment
Dec 23 14:18:41.709: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 23 14:18:43.739: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 23 14:18:43.756: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 23 14:18:43.768: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:43.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707522, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:45.792: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:45.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707522, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:48.163: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:48.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707522, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:49.787: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:49.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707522, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:51.840: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:51.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707522, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:53.798: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:53.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:55.791: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:55.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:57.789: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:57.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:18:59.786: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:18:59.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:19:01.796: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 14:19:01.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712707519, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:19:03.795: INFO: 
Dec 23 14:19:03.795: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 23 14:19:03.806: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-60,SelfLink:/apis/apps/v1/namespaces/deployment-60/deployments/test-rollover-deployment,UID:50989512-fa3f-44a7-b717-e4387f73a404,ResourceVersion:17773711,Generation:2,CreationTimestamp:2019-12-23 14:18:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-23 14:18:39 +0000 UTC 2019-12-23 14:18:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-23 14:19:01 +0000 UTC 2019-12-23 14:18:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 23 14:19:03.816: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-60,SelfLink:/apis/apps/v1/namespaces/deployment-60/replicasets/test-rollover-deployment-854595fc44,UID:43707fd0-31e8-4781-aca4-62cf7022ea84,ResourceVersion:17773700,Generation:2,CreationTimestamp:2019-12-23 14:18:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 50989512-fa3f-44a7-b717-e4387f73a404 0xc0022f9207 0xc0022f9208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 23 14:19:03.816: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 23 14:19:03.816: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-60,SelfLink:/apis/apps/v1/namespaces/deployment-60/replicasets/test-rollover-controller,UID:c147efb2-52e7-46df-8533-3f0954818386,ResourceVersion:17773709,Generation:2,CreationTimestamp:2019-12-23 14:18:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 50989512-fa3f-44a7-b717-e4387f73a404 0xc0022f9137 0xc0022f9138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 14:19:03.817: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-60,SelfLink:/apis/apps/v1/namespaces/deployment-60/replicasets/test-rollover-deployment-9b8b997cf,UID:bab9c1f3-ebbe-4cd5-962d-be0606994b74,ResourceVersion:17773663,Generation:2,CreationTimestamp:2019-12-23 14:18:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 50989512-fa3f-44a7-b717-e4387f73a404 0xc0022f92e0 0xc0022f92e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 14:19:03.829: INFO: Pod "test-rollover-deployment-854595fc44-tnspm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-tnspm,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-60,SelfLink:/api/v1/namespaces/deployment-60/pods/test-rollover-deployment-854595fc44-tnspm,UID:b5254e0f-62b4-4630-b87e-5e10ca6dda85,ResourceVersion:17773683,Generation:0,CreationTimestamp:2019-12-23 14:18:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 43707fd0-31e8-4781-aca4-62cf7022ea84 0xc0022f9ed7 0xc0022f9ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2qnrg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2qnrg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2qnrg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022f9f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022f9f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:18:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:18:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:18:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:18:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-23 14:18:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-23 14:18:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://74f7948289abc28deef44909b4934d27b798fc91fb8dd46273fc3fb2cdf78f9c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:19:03.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-60" for this suite.
Dec 23 14:19:11.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:19:12.055: INFO: namespace deployment-60 deletion completed in 8.218274073s

• [SLOW TEST:43.623 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
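
For context, the rollover behaviour verified above, where a ReplicaSet that never became available is scaled straight to zero once a newer template appears, can be reproduced by hand. A minimal sketch, reusing the images and the MinReadySeconds:10 value from the ReplicaSet dumps; the container name and the use of kubectl set image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # matches MinReadySeconds:10 in the dumps above
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis-slave
        image: gcr.io/google_samples/gb-redisslave:nonexistent   # revision 1 never becomes ready
EOF
# Before revision 1 is available, roll over to a pullable image; the stuck
# ReplicaSet is scaled to 0 without ever serving, as logged above.
kubectl set image deployment/test-rollover-deployment redis-slave=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/test-rollover-deployment
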
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:19:12.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 23 14:19:12.388: INFO: Number of nodes with available pods: 0
Dec 23 14:19:12.389: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:14.295: INFO: Number of nodes with available pods: 0
Dec 23 14:19:14.296: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:14.831: INFO: Number of nodes with available pods: 0
Dec 23 14:19:14.831: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:15.418: INFO: Number of nodes with available pods: 0
Dec 23 14:19:15.418: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:16.437: INFO: Number of nodes with available pods: 0
Dec 23 14:19:16.437: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:17.401: INFO: Number of nodes with available pods: 0
Dec 23 14:19:17.401: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:19.154: INFO: Number of nodes with available pods: 0
Dec 23 14:19:19.154: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:19.644: INFO: Number of nodes with available pods: 0
Dec 23 14:19:19.644: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:20.411: INFO: Number of nodes with available pods: 0
Dec 23 14:19:20.412: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:21.412: INFO: Number of nodes with available pods: 0
Dec 23 14:19:21.412: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:22.453: INFO: Number of nodes with available pods: 0
Dec 23 14:19:22.453: INFO: Node iruya-node is running more than one daemon pod
Dec 23 14:19:23.407: INFO: Number of nodes with available pods: 2
Dec 23 14:19:23.407: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 23 14:19:23.487: INFO: Number of nodes with available pods: 2
Dec 23 14:19:23.487: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2168, will wait for the garbage collector to delete the pods
Dec 23 14:19:24.612: INFO: Deleting DaemonSet.extensions daemon-set took: 13.799764ms
Dec 23 14:19:24.713: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.466408ms
Dec 23 14:19:37.919: INFO: Number of nodes with available pods: 0
Dec 23 14:19:37.919: INFO: Number of running nodes: 0, number of available pods: 0
Dec 23 14:19:37.923: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2168/daemonsets","resourceVersion":"17773841"},"items":null}

Dec 23 14:19:37.927: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2168/pods","resourceVersion":"17773841"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:19:37.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2168" for this suite.
Dec 23 14:19:43.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:19:44.066: INFO: namespace daemonsets-2168 deletion completed in 6.119949833s

• [SLOW TEST:32.010 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
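
The retry behaviour checked above hinges on the DaemonSet controller replacing daemon pods that enter a terminal phase. A minimal sketch of an equivalent experiment; the manifest is illustrative (the test flips a pod's status.phase to 'Failed' through the API, which deleting a pod approximates), and the nginx image is borrowed from elsewhere in this run:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# One daemon pod per schedulable node, as counted in the log above:
kubectl get pods -l app=daemon-set -o wide
# Kill the pod on one node and watch the controller revive it there:
kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=iruya-node
kubectl get pods -l app=daemon-set -o wide -w
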
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:19:44.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 14:19:44.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7774'
Dec 23 14:19:44.333: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 14:19:44.333: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 23 14:19:44.342: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 23 14:19:44.355: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 23 14:19:44.392: INFO: scanned /root for discovery docs: 
Dec 23 14:19:44.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7774'
Dec 23 14:20:07.969: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 23 14:20:07.970: INFO: stdout: "Created e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c\nScaling up e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"

STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 23 14:20:07.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-7774'
Dec 23 14:20:08.128: INFO: stderr: ""
Dec 23 14:20:08.128: INFO: stdout: "e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c-v7gt7 "
Dec 23 14:20:08.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c-v7gt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7774'
Dec 23 14:20:08.252: INFO: stderr: ""
Dec 23 14:20:08.252: INFO: stdout: "true"
Dec 23 14:20:08.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c-v7gt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7774'
Dec 23 14:20:08.358: INFO: stderr: ""
Dec 23 14:20:08.358: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 23 14:20:08.359: INFO: e2e-test-nginx-rc-e503a522b19aff923b2d66949e8d801c-v7gt7 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 23 14:20:08.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7774'
Dec 23 14:20:08.486: INFO: stderr: ""
Dec 23 14:20:08.487: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:20:08.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7774" for this suite.
Dec 23 14:20:30.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:20:30.667: INFO: namespace kubectl-7774 deletion completed in 22.153591287s

• [SLOW TEST:46.598 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
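
Stripped of the harness, the deprecated flow the test drives is just the following (valid for the v1.15 client used here; rolling-update was later removed in favour of Deployments and kubectl rollout):

kubectl run e2e-test-nginx-rc --generator=run/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# The replacement controller is created under a hashed name, scaled 0->1 while
# the old one goes 1->0, then renamed back to e2e-test-nginx-rc:
kubectl get pods -l run=e2e-test-nginx-rc
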
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:20:30.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 23 14:20:38.866: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e44848a7-b893-4ac4-ab72-b194134fa97e,GenerateName:,Namespace:events-3436,SelfLink:/api/v1/namespaces/events-3436/pods/send-events-e44848a7-b893-4ac4-ab72-b194134fa97e,UID:4c7195fb-8f61-48f2-910b-f60a5ec1a645,ResourceVersion:17774038,Generation:0,CreationTimestamp:2019-12-23 14:20:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 756606986,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2vvrl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2vvrl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-2vvrl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000c7f630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c7f650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:20:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:20:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:20:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:20:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-23 14:20:30 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-23 14:20:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://a1e24d50a543462527f7ada51e78d1ca29df93fea00b106534d486a05c04c829}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 23 14:20:40.894: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 23 14:20:42.913: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:20:42.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3436" for this suite.
Dec 23 14:21:29.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:21:29.180: INFO: namespace events-3436 deletion completed in 46.207131258s

• [SLOW TEST:58.511 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
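
Both assertions above can be made by hand with event field selectors; a sketch reusing the pod name from the log (involvedObject.name and reason are standard event selector fields):

# Scheduler event (reason 'Scheduled', emitted by default-scheduler):
kubectl get events -n events-3436 \
    --field-selector involvedObject.name=send-events-e44848a7-b893-4ac4-ab72-b194134fa97e,reason=Scheduled
# Kubelet events (Pulled/Created/Started on the node that ran the pod):
kubectl get events -n events-3436 \
    --field-selector involvedObject.name=send-events-e44848a7-b893-4ac4-ab72-b194134fa97e,reason=Started
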
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:21:29.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 14:21:29.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2717'
Dec 23 14:21:29.393: INFO: stderr: ""
Dec 23 14:21:29.393: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 23 14:21:29.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2717'
Dec 23 14:21:33.866: INFO: stderr: ""
Dec 23 14:21:33.866: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:21:33.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2717" for this suite.
Dec 23 14:21:39.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:21:40.041: INFO: namespace kubectl-2717 deletion completed in 6.149957373s

• [SLOW TEST:10.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
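
--restart=Never is what selects the run-pod/v1 generator, so a bare Pod is created instead of a controller. The equivalent of the commands above:

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
    --image=docker.io/library/nginx:1.14-alpine
# A bare Pod (no rc/deployment) with restartPolicy Never:
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'
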
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:21:40.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 23 14:21:40.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9425'
Dec 23 14:21:40.277: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 14:21:40.277: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 23 14:21:42.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9425'
Dec 23 14:21:42.484: INFO: stderr: ""
Dec 23 14:21:42.484: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:21:42.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9425" for this suite.
Dec 23 14:21:48.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:21:48.816: INFO: namespace kubectl-9425 deletion completed in 6.320331874s

• [SLOW TEST:8.772 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
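
With no --restart or --generator flag, the v1.15 client defaults to --restart=Always and therefore the deployment/apps.v1 generator, which is exactly the deprecation warning logged above. Equivalent by hand:

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment
kubectl get pods -l run=e2e-test-nginx-deployment   # the pod the test waits for
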
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:21:48.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 23 14:21:48.963: INFO: Waiting up to 5m0s for pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286" in namespace "containers-4303" to be "success or failure"
Dec 23 14:21:48.981: INFO: Pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286": Phase="Pending", Reason="", readiness=false. Elapsed: 17.782988ms
Dec 23 14:21:50.989: INFO: Pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02513036s
Dec 23 14:21:52.999: INFO: Pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035235057s
Dec 23 14:21:55.024: INFO: Pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06024816s
Dec 23 14:21:57.047: INFO: Pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083137327s
STEP: Saw pod success
Dec 23 14:21:57.047: INFO: Pod "client-containers-8913ef14-0131-4d43-a26b-baac7e88e286" satisfied condition "success or failure"
Dec 23 14:21:57.054: INFO: Trying to get logs from node iruya-node pod client-containers-8913ef14-0131-4d43-a26b-baac7e88e286 container test-container: 
STEP: delete the pod
Dec 23 14:21:57.194: INFO: Waiting for pod client-containers-8913ef14-0131-4d43-a26b-baac7e88e286 to disappear
Dec 23 14:21:57.209: INFO: Pod client-containers-8913ef14-0131-4d43-a26b-baac7e88e286 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:21:57.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4303" for this suite.
Dec 23 14:22:03.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:22:03.392: INFO: namespace containers-4303 deletion completed in 6.172065766s

• [SLOW TEST:14.576 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
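
The "override all" pod above sets both spec.containers[].command (replacing the image ENTRYPOINT) and args (replacing CMD). A minimal sketch with a plain busybox stand-in for the e2e test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # stand-in; the test uses its own image
    command: ["/bin/echo"]              # overrides the image ENTRYPOINT
    args: ["override", "arguments"]     # overrides the image CMD
EOF
kubectl logs client-containers-demo     # prints: override arguments
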
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:22:03.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 23 14:22:03.932: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 14:22:03.996: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 14:22:04.000: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 23 14:22:04.024: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 23 14:22:04.025: INFO: 	Container weave ready: true, restart count 0
Dec 23 14:22:04.025: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 14:22:04.025: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.025: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 14:22:04.055: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 23 14:22:04.055: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 23 14:22:04.055: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container coredns ready: true, restart count 0
Dec 23 14:22:04.055: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container etcd ready: true, restart count 0
Dec 23 14:22:04.055: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 23 14:22:04.055: INFO: 	Container weave ready: true, restart count 0
Dec 23 14:22:04.055: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 14:22:04.055: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container coredns ready: true, restart count 0
Dec 23 14:22:04.055: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 23 14:22:04.055: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 14:22:04.055: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 23 14:22:04.055: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 23 14:22:04.225: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 23 14:22:04.225: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5.15e306079b45626a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2829/filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5.15e30608bbd45bde], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5.15e3060977ad4f82], Reason = [Created], Message = [Created container filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5.15e30609a3454f76], Reason = [Started], Message = [Started container filler-pod-5fe22ec3-dfde-41e8-8ac7-9824df9940c5]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9.15e306079c35c2d3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2829/filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9.15e30608d2645792], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9.15e30609b61c42c3], Reason = [Created], Message = [Created container filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9.15e30609d422e166], Reason = [Started], Message = [Started container filler-pod-73bde498-9280-4a7d-9e6b-a21b20f4cbb9]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e3060a6b7feeaf], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:22:17.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2829" for this suite.
Dec 23 14:22:24.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:22:25.055: INFO: namespace sched-pred-2829 deletion completed in 7.574641888s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.662 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
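
The predicate being validated is plain arithmetic: sum the CPU requests already on each node (listed per pod above), fill the remainder with pause pods, and any further pod with a non-zero request must fail with 'Insufficient cpu'. A sketch of the failing half; the 8-CPU request is an arbitrary value chosen to exceed whatever is left:

kubectl describe nodes | grep -A 5 'Allocated resources'   # what is already requested
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "8"          # deliberately more CPU than any node has free
EOF
kubectl describe pod additional-pod     # Events: FailedScheduling ... Insufficient cpu
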
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:22:25.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 23 14:22:25.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 23 14:22:25.474: INFO: stderr: ""
Dec 23 14:22:25.474: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:22:25.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3306" for this suite.
Dec 23 14:22:31.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:22:31.659: INFO: namespace kubectl-3306 deletion completed in 6.17567207s

• [SLOW TEST:6.604 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
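
The assertion reduces to one line against the stdout captured above:

kubectl api-versions | grep -x v1    # exits 0 iff the core 'v1' group/version is served
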
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:22:31.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:22:31.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98" in namespace "downward-api-5266" to be "success or failure"
Dec 23 14:22:31.909: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 13.769753ms
Dec 23 14:22:33.928: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032482537s
Dec 23 14:22:35.947: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051587131s
Dec 23 14:22:37.955: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059376734s
Dec 23 14:22:39.961: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065701614s
Dec 23 14:22:41.970: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074315327s
STEP: Saw pod success
Dec 23 14:22:41.970: INFO: Pod "downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98" satisfied condition "success or failure"
Dec 23 14:22:41.974: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98 container client-container: 
STEP: delete the pod
Dec 23 14:22:42.097: INFO: Waiting for pod downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98 to disappear
Dec 23 14:22:42.168: INFO: Pod downwardapi-volume-c2ff5328-6e8d-4626-b099-02e23dae8c98 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:22:42.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5266" for this suite.
Dec 23 14:22:48.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:22:48.371: INFO: namespace downward-api-5266 deletion completed in 6.195657105s

• [SLOW TEST:16.711 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
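
The plugin under test projects container resource fields into files via a downwardAPI volume. A minimal sketch with a busybox stand-in for the e2e client image; with the default divisor of 1, limits.memory is rendered in bytes, so 64Mi reads back as 67108864:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downwardapi-volume-demo    # 67108864
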
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:22:48.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:22:48.549: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 23 14:22:52.108: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:22:52.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3207" for this suite.
Dec 23 14:23:06.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:23:06.876: INFO: namespace replication-controller-3207 deletion completed in 14.266367269s

• [SLOW TEST:18.504 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
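
The failure condition comes from the quota admission plugin rejecting the third pod, which the rc controller surfaces as a ReplicaFailure condition. A sketch of the same sequence (the nginx image is an arbitrary stand-in):

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Surfaced condition (FailedCreate / exceeded quota):
kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
# Scaling down within the quota clears it, as asserted above:
kubectl scale rc condition-test --replicas=2
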
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:23:06.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4859
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 23 14:23:07.031: INFO: Found 0 stateful pods, waiting for 3
Dec 23 14:23:17.043: INFO: Found 2 stateful pods, waiting for 3
Dec 23 14:23:27.044: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:23:27.045: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:23:27.045: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 14:23:37.044: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:23:37.044: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:23:37.044: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 23 14:23:37.104: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 23 14:23:47.218: INFO: Updating stateful set ss2
Dec 23 14:23:47.347: INFO: Waiting for Pod statefulset-4859/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 23 14:23:57.845: INFO: Found 2 stateful pods, waiting for 3
Dec 23 14:24:07.867: INFO: Found 2 stateful pods, waiting for 3
Dec 23 14:24:17.864: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:24:17.864: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:24:17.864: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 14:24:27.864: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:24:27.864: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 14:24:27.864: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 23 14:24:27.913: INFO: Updating stateful set ss2
Dec 23 14:24:27.951: INFO: Waiting for Pod statefulset-4859/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 23 14:24:37.979: INFO: Updating stateful set ss2
Dec 23 14:24:38.015: INFO: Waiting for StatefulSet statefulset-4859/ss2 to complete update
Dec 23 14:24:38.016: INFO: Waiting for Pod statefulset-4859/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 23 14:24:48.031: INFO: Waiting for StatefulSet statefulset-4859/ss2 to complete update
Dec 23 14:24:48.031: INFO: Waiting for Pod statefulset-4859/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 23 14:24:58.031: INFO: Waiting for StatefulSet statefulset-4859/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 23 14:25:08.036: INFO: Deleting all statefulset in ns statefulset-4859
Dec 23 14:25:08.044: INFO: Scaling statefulset ss2 to 0
Dec 23 14:25:38.129: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 14:25:38.134: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:25:38.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4859" for this suite.
Dec 23 14:25:46.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:25:46.329: INFO: namespace statefulset-4859 deletion completed in 8.161941673s

• [SLOW TEST:159.452 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
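
All three phases above are driven by one knob, spec.updateStrategy.rollingUpdate.partition: pods with an ordinal >= partition move to the new revision, the rest stay put. A sketch against the ss2 set from this test (the container name 'nginx' is an assumption):

# Stage the new image with partition > replicas: nothing rolls (as asserted above).
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Canary: only the highest ordinal (ss2-2) moves to the new revision.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# Phased rollout: lower the partition until every pod is updated.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
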
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:25:46.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 23 14:25:55.121: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 23 14:26:05.349: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:26:05.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9863" for this suite.
Dec 23 14:26:11.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:26:11.523: INFO: namespace pods-9863 deletion completed in 6.162781459s

• [SLOW TEST:25.193 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
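The graceful deletion verified above is governed by the pod's terminationGracePeriodSeconds (or an explicit --grace-period on delete): the kubelet sends SIGTERM, waits out the grace period, then force-kills. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                  # illustrative
spec:
  terminationGracePeriodSeconds: 30   # how long the kubelet waits after SIGTERM
  containers:
  - name: app
    image: nginx:1.17
# an explicit grace period on delete overrides the spec value:
#   kubectl delete pod graceful-pod --grace-period=10
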
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:26:11.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:26:11.630: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 23 14:26:16.648: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 23 14:26:20.665: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 23 14:26:30.741: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5565,SelfLink:/apis/apps/v1/namespaces/deployment-5565/deployments/test-cleanup-deployment,UID:d1b992b5-78c4-4c11-917e-a633a5146248,ResourceVersion:17775053,Generation:1,CreationTimestamp:2019-12-23 14:26:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-23 14:26:20 +0000 UTC 2019-12-23 14:26:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-23 14:26:29 +0000 UTC 2019-12-23 14:26:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 23 14:26:31.170: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5565,SelfLink:/apis/apps/v1/namespaces/deployment-5565/replicasets/test-cleanup-deployment-55bbcbc84c,UID:bc10467d-fc63-43e8-a69d-c68a24d325dd,ResourceVersion:17775042,Generation:1,CreationTimestamp:2019-12-23 14:26:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d1b992b5-78c4-4c11-917e-a633a5146248 0xc002503ae7 0xc002503ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 23 14:26:31.181: INFO: Pod "test-cleanup-deployment-55bbcbc84c-cwvh7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-cwvh7,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5565,SelfLink:/api/v1/namespaces/deployment-5565/pods/test-cleanup-deployment-55bbcbc84c-cwvh7,UID:27861ff4-2dd4-44d0-bdc1-cff008aeadd6,ResourceVersion:17775041,Generation:0,CreationTimestamp:2019-12-23 14:26:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c bc10467d-fc63-43e8-a69d-c68a24d325dd 0xc0028f8287 0xc0028f8288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4gqst {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4gqst,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4gqst true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028f8300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028f8320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:26:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:26:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:26:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:26:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-23 14:26:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-23 14:26:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2ee17f2777cbf9f3cf0c3524910a0db6114060b829e129ef18d903098bf022ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:26:31.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5565" for this suite.
Dec 23 14:26:39.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:26:39.340: INFO: namespace deployment-5565 deletion completed in 8.149910927s

• [SLOW TEST:27.816 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
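The history cleanup waited on above is controlled by the Deployment's revisionHistoryLimit; the dump shows RevisionHistoryLimit:*0, so every superseded ReplicaSet is deleted. A sketch mirroring that spec (the redis image matches the dump, everything else is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no superseded ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
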
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:26:39.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 23 14:26:40.055: INFO: created pod pod-service-account-defaultsa
Dec 23 14:26:40.055: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 23 14:26:40.072: INFO: created pod pod-service-account-mountsa
Dec 23 14:26:40.072: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 23 14:26:40.131: INFO: created pod pod-service-account-nomountsa
Dec 23 14:26:40.131: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 23 14:26:40.259: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 23 14:26:40.259: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 23 14:26:40.320: INFO: created pod pod-service-account-mountsa-mountspec
Dec 23 14:26:40.320: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 23 14:26:40.361: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 23 14:26:40.361: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 23 14:26:40.460: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 23 14:26:40.460: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 23 14:26:41.228: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 23 14:26:41.228: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 23 14:26:41.652: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 23 14:26:41.653: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:26:41.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4778" for this suite.
Dec 23 14:27:13.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:27:13.211: INFO: namespace svcaccounts-4778 deletion completed in 31.548653972s

• [SLOW TEST:33.871 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
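The nine pods above cover the automountServiceAccountToken matrix: the field can be set on the ServiceAccount, on the pod spec, or both, and the pod-level value wins when both are set. A sketch of the opt-out case (names are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                      # illustrative
automountServiceAccountToken: false     # ServiceAccount-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountspec                 # illustrative
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level value wins when both are set
  containers:
  - name: app
    image: nginx:1.17
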
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:27:13.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3569/configmap-test-b9f98e10-b28d-4c64-bcaa-197bffa2385d
STEP: Creating a pod to test consume configMaps
Dec 23 14:27:13.296: INFO: Waiting up to 5m0s for pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c" in namespace "configmap-3569" to be "success or failure"
Dec 23 14:27:13.355: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c": Phase="Pending", Reason="", readiness=false. Elapsed: 58.431209ms
Dec 23 14:27:15.376: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079667995s
Dec 23 14:27:17.393: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096624173s
Dec 23 14:27:19.459: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162469739s
Dec 23 14:27:21.518: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221282608s
Dec 23 14:27:23.533: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.236031853s
STEP: Saw pod success
Dec 23 14:27:23.533: INFO: Pod "pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c" satisfied condition "success or failure"
Dec 23 14:27:23.540: INFO: Trying to get logs from node iruya-node pod pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c container env-test: 
STEP: delete the pod
Dec 23 14:27:23.742: INFO: Waiting for pod pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c to disappear
Dec 23 14:27:23.797: INFO: Pod pod-configmaps-35f30bd1-b4f3-4463-904b-e8ef5ae5c67c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:27:23.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3569" for this suite.
Dec 23 14:27:30.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:27:30.936: INFO: namespace configmap-3569 deletion completed in 7.124925328s

• [SLOW TEST:17.724 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
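The env-test container above consumes a single ConfigMap key through env[].valueFrom.configMapKeyRef. A sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test          # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]    # the test inspects the injected variable
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
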
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:27:30.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:27:39.230: INFO: Waiting up to 5m0s for pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc" in namespace "pods-3018" to be "success or failure"
Dec 23 14:27:39.349: INFO: Pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 118.668619ms
Dec 23 14:27:41.364: INFO: Pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133199335s
Dec 23 14:27:43.452: INFO: Pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221459084s
Dec 23 14:27:45.466: INFO: Pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236065092s
Dec 23 14:27:47.476: INFO: Pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.245300812s
STEP: Saw pod success
Dec 23 14:27:47.476: INFO: Pod "client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc" satisfied condition "success or failure"
Dec 23 14:27:47.480: INFO: Trying to get logs from node iruya-node pod client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc container env3cont: 
STEP: delete the pod
Dec 23 14:27:47.585: INFO: Waiting for pod client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc to disappear
Dec 23 14:27:47.618: INFO: Pod client-envvars-e2083654-52ed-4c1b-9486-357e7affe8dc no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:27:47.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3018" for this suite.
Dec 23 14:28:39.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:28:39.853: INFO: namespace pods-3018 deletion completed in 52.227514724s

• [SLOW TEST:68.916 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
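The env3cont container above checks the kubelet-injected service environment variables: a pod started after a Service exists sees <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT for it. A sketch (the service name and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: fooservice              # illustrative
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: env-client              # illustrative; must start after the Service exists
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.29
    # the environment then contains variables such as
    # FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT
    command: ["sh", "-c", "env"]
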
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:28:39.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:28:52.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2438" for this suite.
Dec 23 14:28:58.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:28:58.569: INFO: namespace kubelet-test-2438 deletion completed in 6.210556798s

• [SLOW TEST:18.715 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
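The test above schedules a command that always fails and asserts the container ends up with a terminated state carrying a reason. A sketch of reproducing that by hand (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]     # always exits non-zero
# the terminated reason can then be read back, e.g.:
#   kubectl get pod bin-false \
#     -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# which should print "Error"
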
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:28:58.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 23 14:29:10.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-966e114e-9017-41db-9796-e8d7c1b161b8 -c busybox-main-container --namespace=emptydir-4257 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 23 14:29:13.195: INFO: stderr: ""
Dec 23 14:29:13.195: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:29:13.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4257" for this suite.
Dec 23 14:29:19.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:29:19.412: INFO: namespace emptydir-4257 deletion completed in 6.209810344s

• [SLOW TEST:20.840 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
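The shared-volume check above relies on an emptyDir volume mounted into both containers of the pod; the mount path and file below match the exec in the log, the rest is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume        # illustrative
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: busybox-main-container
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare   # path matches the exec in the log
  - name: busybox-sub-container
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
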
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:29:19.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6243/configmap-test-e5aa1468-61ac-47db-bfbb-2a55052bc78c
STEP: Creating a pod to test consume configMaps
Dec 23 14:29:19.614: INFO: Waiting up to 5m0s for pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7" in namespace "configmap-6243" to be "success or failure"
Dec 23 14:29:19.618: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.747366ms
Dec 23 14:29:21.632: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017857922s
Dec 23 14:29:23.650: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035280434s
Dec 23 14:29:25.660: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045935888s
Dec 23 14:29:27.674: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059021375s
Dec 23 14:29:29.685: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070681464s
STEP: Saw pod success
Dec 23 14:29:29.685: INFO: Pod "pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7" satisfied condition "success or failure"
Dec 23 14:29:29.688: INFO: Trying to get logs from node iruya-node pod pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7 container env-test: 
STEP: delete the pod
Dec 23 14:29:29.826: INFO: Waiting for pod pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7 to disappear
Dec 23 14:29:29.835: INFO: Pod pod-configmaps-40ab482c-0a50-43a3-8467-e6b2c32e7aa7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:29:29.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6243" for this suite.
Dec 23 14:29:35.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:29:36.018: INFO: namespace configmap-6243 deletion completed in 6.177069778s

• [SLOW TEST:16.606 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
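Unlike the earlier single-key case, consuming a ConfigMap "via the environment" maps every key to a variable with envFrom; an optional prefix keeps the names distinct. A sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test          # illustrative
data:
  data_1: value-1
  data_2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    envFrom:
    - prefix: p_                # optional; variables become p_data_1, p_data_2
      configMapRef:
        name: configmap-test
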
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:29:36.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:30:04.170: INFO: Container started at 2019-12-23 14:29:42 +0000 UTC, pod became ready at 2019-12-23 14:30:03 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:30:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2371" for this suite.
Dec 23 14:30:26.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:30:26.375: INFO: namespace container-probe-2371 deletion completed in 22.199293476s

• [SLOW TEST:50.355 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
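The roughly 21-second gap between container start and readiness in the log is the probe's initial delay at work: the pod must not report Ready before initialDelaySeconds elapses, and a readiness probe never restarts the container. A sketch (image and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver          # illustrative
spec:
  containers:
  - name: test-webserver
    image: nginx:1.17           # any container serving HTTP on port 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # the pod must not report Ready before this
      periodSeconds: 5
      failureThreshold: 3       # failures only unready the pod, never restart it
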
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:30:26.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:30:26.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39" in namespace "projected-2809" to be "success or failure"
Dec 23 14:30:27.197: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39": Phase="Pending", Reason="", readiness=false. Elapsed: 694.691199ms
Dec 23 14:30:29.209: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.707271395s
Dec 23 14:30:31.220: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71821061s
Dec 23 14:30:33.230: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727813205s
Dec 23 14:30:35.239: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737313772s
Dec 23 14:30:37.248: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.745433127s
STEP: Saw pod success
Dec 23 14:30:37.248: INFO: Pod "downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39" satisfied condition "success or failure"
Dec 23 14:30:37.252: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39 container client-container: 
STEP: delete the pod
Dec 23 14:30:37.435: INFO: Waiting for pod downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39 to disappear
Dec 23 14:30:37.441: INFO: Pod downwardapi-volume-f9377d7a-18b2-4629-bc0d-4e347f7bec39 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:30:37.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2809" for this suite.
Dec 23 14:30:43.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:30:43.668: INFO: namespace projected-2809 deletion completed in 6.21999334s

• [SLOW TEST:17.292 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
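The mode assertion above applies to a per-item mode in a projected downwardAPI volume source. A sketch with illustrative names and paths:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod  # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400          # per-item file mode (octal), what the test checks
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
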
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:30:43.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6llq
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 14:30:43.937: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6llq" in namespace "subpath-9155" to be "success or failure"
Dec 23 14:30:43.965: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.082989ms
Dec 23 14:30:45.973: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035698637s
Dec 23 14:30:47.984: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046274087s
Dec 23 14:30:50.000: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062374714s
Dec 23 14:30:52.014: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07660622s
Dec 23 14:30:54.022: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 10.084665961s
Dec 23 14:30:56.029: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 12.091259435s
Dec 23 14:30:58.039: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 14.101620402s
Dec 23 14:31:00.048: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 16.111049425s
Dec 23 14:31:02.055: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 18.11809044s
Dec 23 14:31:04.064: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 20.126239299s
Dec 23 14:31:06.076: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 22.138442512s
Dec 23 14:31:08.091: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 24.153404855s
Dec 23 14:31:10.103: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 26.165660014s
Dec 23 14:31:12.112: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Running", Reason="", readiness=true. Elapsed: 28.175195827s
Dec 23 14:31:14.126: INFO: Pod "pod-subpath-test-configmap-6llq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.188388103s
STEP: Saw pod success
Dec 23 14:31:14.126: INFO: Pod "pod-subpath-test-configmap-6llq" satisfied condition "success or failure"
Dec 23 14:31:14.136: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-6llq container test-container-subpath-configmap-6llq: 
STEP: delete the pod
Dec 23 14:31:14.337: INFO: Waiting for pod pod-subpath-test-configmap-6llq to disappear
Dec 23 14:31:14.344: INFO: Pod pod-subpath-test-configmap-6llq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6llq
Dec 23 14:31:14.344: INFO: Deleting pod "pod-subpath-test-configmap-6llq" in namespace "subpath-9155"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:31:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9155" for this suite.
Dec 23 14:31:20.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:31:20.585: INFO: namespace subpath-9155 deletion completed in 6.20599832s

• [SLOW TEST:36.917 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
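The subPath case above mounts a single ConfigMap key over a file that already exists in the container image, rather than shadowing a whole directory. A sketch (the target file and names are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap       # illustrative
data:
  configmap-key: configmap-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test        # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: subpath-configmap
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: config
      mountPath: /etc/passwd    # a file that already exists in the image
      subPath: configmap-key    # only this key shadows the existing file
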
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:31:20.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:31:20.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:31:31.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8734" for this suite.
Dec 23 14:32:37.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:32:37.386: INFO: namespace pods-8734 deletion completed in 1m6.161419181s

• [SLOW TEST:76.799 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:32:37.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 23 14:32:53.598: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:32:53.613: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:32:55.614: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:32:55.627: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:32:57.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:32:57.629: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:32:59.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:32:59.632: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:01.614: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:01.628: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:03.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:03.634: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:05.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:05.623: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:07.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:07.626: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:09.614: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:09.626: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:11.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:11.641: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:13.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:13.636: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 14:33:15.615: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 14:33:15.624: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:33:15.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7148" for this suite.
Dec 23 14:33:37.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:33:37.838: INFO: namespace container-lifecycle-hook-7148 deletion completed in 22.162970986s

• [SLOW TEST:60.451 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
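The preStop exec hook above runs inside the container after deletion is requested and before SIGTERM is delivered; the test only checks that the handler fired. A sketch (the handler endpoint is hypothetical, standing in for the separate handler pod the test creates):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # runs in the container between the delete request and SIGTERM;
          # the URL below is hypothetical, pointing at whatever records the hook
          command: ["sh", "-c", "wget -q -O- http://pod-handle-http-request:8080/echo?msg=prestop"]
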
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:33:37.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-dd6ee97d-4a17-4f6d-ac2d-3fb21ce1d7ab
STEP: Creating configMap with name cm-test-opt-upd-934fad5d-fc66-4422-b44f-3e1e21241105
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-dd6ee97d-4a17-4f6d-ac2d-3fb21ce1d7ab
STEP: Updating configmap cm-test-opt-upd-934fad5d-fc66-4422-b44f-3e1e21241105
STEP: Creating configMap with name cm-test-opt-create-2e37b100-ff99-4a0e-91ec-031414eb50fd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:33:54.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4502" for this suite.
Dec 23 14:34:32.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:34:32.640: INFO: namespace projected-4502 deletion completed in 38.267855304s

• [SLOW TEST:54.802 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
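The "optional updates" case above works because each projected configMap source is marked optional: the pod keeps running when a source is deleted, and picks the data up when a source appears later. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps  # illustrative
spec:
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del     # may be deleted while the pod runs
          optional: true
      - configMap:
          name: cm-test-opt-create  # may only be created later
          optional: true
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
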
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:34:32.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Dec 23 14:34:41.834: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:34:42.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2017" for this suite.
Dec 23 14:35:04.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:35:04.994: INFO: namespace replicaset-2017 deletion completed in 22.103735339s

• [SLOW TEST:32.352 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
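Adoption and release above are purely label-driven: a bare pod matching the ReplicaSet selector is adopted, and relabeling it releases it again (the controller then creates a replacement). A sketch with illustrative names:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release  # a bare pod carrying this label is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx:1.17
# changing the pod's label releases it (the controller replaces it), e.g.:
#   kubectl label pod <pod-name> name=pod-adoption-release-released --overwrite
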
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:35:04.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:35:05.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026" in namespace "downward-api-480" to be "success or failure"
Dec 23 14:35:05.158: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026": Phase="Pending", Reason="", readiness=false. Elapsed: 70.192915ms
Dec 23 14:35:07.166: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078160821s
Dec 23 14:35:09.175: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086789291s
Dec 23 14:35:11.183: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094786684s
Dec 23 14:35:13.197: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026": Phase="Running", Reason="", readiness=true. Elapsed: 8.108866493s
Dec 23 14:35:15.204: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116242615s
STEP: Saw pod success
Dec 23 14:35:15.204: INFO: Pod "downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026" satisfied condition "success or failure"
Dec 23 14:35:15.208: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026 container client-container: 
STEP: delete the pod
Dec 23 14:35:15.267: INFO: Waiting for pod downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026 to disappear
Dec 23 14:35:15.294: INFO: Pod downwardapi-volume-c3d1d622-2b73-497a-be6a-bccc593c4026 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:35:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-480" for this suite.
Dec 23 14:35:21.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:35:21.474: INFO: namespace downward-api-480 deletion completed in 6.169334592s

• [SLOW TEST:16.480 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
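The cpu-request check above uses a downwardAPI volume item with a resourceFieldRef; the divisor controls the unit written into the file. A sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-pod     # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m             # the file then contains "250"
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
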
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:35:21.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 23 14:35:21.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9248'
Dec 23 14:35:21.970: INFO: stderr: ""
Dec 23 14:35:21.970: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 14:35:21.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9248'
Dec 23 14:35:22.196: INFO: stderr: ""
Dec 23 14:35:22.196: INFO: stdout: "update-demo-nautilus-jk99g update-demo-nautilus-r6nv8 "
Dec 23 14:35:22.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jk99g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:35:22.404: INFO: stderr: ""
Dec 23 14:35:22.404: INFO: stdout: ""
Dec 23 14:35:22.404: INFO: update-demo-nautilus-jk99g is created but not running
Dec 23 14:35:27.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9248'
Dec 23 14:35:27.618: INFO: stderr: ""
Dec 23 14:35:27.618: INFO: stdout: "update-demo-nautilus-jk99g update-demo-nautilus-r6nv8 "
Dec 23 14:35:27.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jk99g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:35:29.129: INFO: stderr: ""
Dec 23 14:35:29.130: INFO: stdout: ""
Dec 23 14:35:29.130: INFO: update-demo-nautilus-jk99g is created but not running
Dec 23 14:35:34.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9248'
Dec 23 14:35:34.253: INFO: stderr: ""
Dec 23 14:35:34.253: INFO: stdout: "update-demo-nautilus-jk99g update-demo-nautilus-r6nv8 "
Dec 23 14:35:34.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jk99g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:35:34.346: INFO: stderr: ""
Dec 23 14:35:34.346: INFO: stdout: "true"
Dec 23 14:35:34.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jk99g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:35:34.430: INFO: stderr: ""
Dec 23 14:35:34.430: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:35:34.431: INFO: validating pod update-demo-nautilus-jk99g
Dec 23 14:35:34.439: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:35:34.439: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:35:34.439: INFO: update-demo-nautilus-jk99g is verified up and running
Dec 23 14:35:34.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6nv8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:35:34.536: INFO: stderr: ""
Dec 23 14:35:34.537: INFO: stdout: "true"
Dec 23 14:35:34.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6nv8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:35:34.653: INFO: stderr: ""
Dec 23 14:35:34.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 14:35:34.653: INFO: validating pod update-demo-nautilus-r6nv8
Dec 23 14:35:34.675: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 14:35:34.675: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 23 14:35:34.675: INFO: update-demo-nautilus-r6nv8 is verified up and running
STEP: rolling-update to new replication controller
Dec 23 14:35:34.678: INFO: scanned /root for discovery docs: 
Dec 23 14:35:34.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9248'
Dec 23 14:36:07.247: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 23 14:36:07.247: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 14:36:07.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9248'
Dec 23 14:36:07.388: INFO: stderr: ""
Dec 23 14:36:07.388: INFO: stdout: "update-demo-kitten-bqvxt update-demo-kitten-dhsjn update-demo-nautilus-jk99g "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 23 14:36:12.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9248'
Dec 23 14:36:12.649: INFO: stderr: ""
Dec 23 14:36:12.649: INFO: stdout: "update-demo-kitten-bqvxt update-demo-kitten-dhsjn "
Dec 23 14:36:12.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bqvxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:36:12.760: INFO: stderr: ""
Dec 23 14:36:12.760: INFO: stdout: "true"
Dec 23 14:36:12.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bqvxt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:36:12.925: INFO: stderr: ""
Dec 23 14:36:12.925: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 23 14:36:12.925: INFO: validating pod update-demo-kitten-bqvxt
Dec 23 14:36:12.948: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 23 14:36:12.948: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 23 14:36:12.948: INFO: update-demo-kitten-bqvxt is verified up and running
Dec 23 14:36:12.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dhsjn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:36:13.094: INFO: stderr: ""
Dec 23 14:36:13.094: INFO: stdout: "true"
Dec 23 14:36:13.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dhsjn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9248'
Dec 23 14:36:13.203: INFO: stderr: ""
Dec 23 14:36:13.204: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 23 14:36:13.204: INFO: validating pod update-demo-kitten-dhsjn
Dec 23 14:36:13.717: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 23 14:36:13.718: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 23 14:36:13.718: INFO: update-demo-kitten-dhsjn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:36:13.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9248" for this suite.
Dec 23 14:36:35.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:36:35.940: INFO: namespace kubectl-9248 deletion completed in 22.201367291s

• [SLOW TEST:74.464 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
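
The stderr captured above notes that `kubectl rolling-update` is deprecated in favor of `kubectl rollout`. With a Deployment instead of a bare replication controller, the same image swap would look roughly like the sketch below; the Deployment and container name update-demo are hypothetical, while the image is the one this run rolled to:

# Trigger a rolling update by changing the pod template image
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
# Wait for the new ReplicaSet to become fully available
kubectl rollout status deployment/update-demo
# Revert to the previous template if the new pods misbehave
kubectl rollout undo deployment/update-demo
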
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:36:35.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 23 14:36:36.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7757'
Dec 23 14:36:36.371: INFO: stderr: ""
Dec 23 14:36:36.371: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 23 14:36:37.386: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:37.386: INFO: Found 0 / 1
Dec 23 14:36:38.381: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:38.381: INFO: Found 0 / 1
Dec 23 14:36:39.388: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:39.388: INFO: Found 0 / 1
Dec 23 14:36:40.380: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:40.380: INFO: Found 0 / 1
Dec 23 14:36:41.986: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:41.987: INFO: Found 0 / 1
Dec 23 14:36:42.381: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:42.382: INFO: Found 0 / 1
Dec 23 14:36:43.382: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:43.383: INFO: Found 0 / 1
Dec 23 14:36:44.379: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:44.379: INFO: Found 0 / 1
Dec 23 14:36:45.389: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:45.390: INFO: Found 1 / 1
Dec 23 14:36:45.390: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 23 14:36:45.395: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:45.396: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Dec 23 14:36:45.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-ww6pc --namespace=kubectl-7757 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 23 14:36:45.591: INFO: stderr: ""
Dec 23 14:36:45.591: INFO: stdout: "pod/redis-master-ww6pc patched\n"
STEP: checking annotations
Dec 23 14:36:45.621: INFO: Selector matched 1 pod for map[app:redis]
Dec 23 14:36:45.621: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:36:45.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7757" for this suite.
Dec 23 14:37:07.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:37:07.808: INFO: namespace kubectl-7757 deletion completed in 22.183059777s

• [SLOW TEST:31.868 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
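
The patch the test issues is a strategic-merge patch that adds a single annotation. Generalized (the pod name and namespace below are the ones from this run), one way to apply and verify it by hand:

# Add the annotation exactly as the test does
kubectl patch pod redis-master-ww6pc --namespace=kubectl-7757 -p '{"metadata":{"annotations":{"x":"y"}}}'
# Read it back; prints "y" if the patch landed
kubectl get pod redis-master-ww6pc --namespace=kubectl-7757 -o jsonpath='{.metadata.annotations.x}'
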
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:37:07.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 23 14:37:07.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 23 14:37:08.076: INFO: stderr: ""
Dec 23 14:37:08.076: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:37:08.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8431" for this suite.
Dec 23 14:37:14.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:37:14.203: INFO: namespace kubectl-8431 deletion completed in 6.120050828s

• [SLOW TEST:6.394 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services are included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
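
The assertion boils down to running cluster-info and checking that the master endpoint appears in the output. For reference, the two commands the stdout above involves:

# Prints the Kubernetes master and KubeDNS URLs asserted on above
kubectl cluster-info
# Much more verbose diagnostics, as suggested by the command's own output
kubectl cluster-info dump
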
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:37:14.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:37:14.959: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 42.476876ms)
Dec 23 14:37:14.977: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.96651ms)
Dec 23 14:37:14.986: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.688848ms)
Dec 23 14:37:14.999: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.927168ms)
Dec 23 14:37:15.008: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.669524ms)
Dec 23 14:37:15.046: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.206633ms)
Dec 23 14:37:15.065: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.86854ms)
Dec 23 14:37:15.082: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.160678ms)
Dec 23 14:37:15.094: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.219472ms)
Dec 23 14:37:15.099: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.586264ms)
Dec 23 14:37:15.105: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.143572ms)
Dec 23 14:37:15.109: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.609613ms)
Dec 23 14:37:15.113: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.529607ms)
Dec 23 14:37:15.116: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.098288ms)
Dec 23 14:37:15.119: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.181933ms)
Dec 23 14:37:15.124: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.605766ms)
Dec 23 14:37:15.129: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.764931ms)
Dec 23 14:37:15.135: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.332099ms)
Dec 23 14:37:15.141: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.993751ms)
Dec 23 14:37:15.146: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.358605ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:37:15.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3991" for this suite.
Dec 23 14:37:21.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:37:21.361: INFO: namespace proxy-3991 deletion completed in 6.208660921s

• [SLOW TEST:7.157 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
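
Each of the twenty requests above hits the node's proxy subresource through the apiserver. The same endpoint can be queried by hand; the node name iruya-node is taken from this run:

# Fetch the node's log directory listing via the proxy subresource
kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
# Or go through a local API proxy instead
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/
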
SSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:37:21.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:37:21.501: INFO: Creating deployment "test-recreate-deployment"
Dec 23 14:37:21.509: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 23 14:37:21.528: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 23 14:37:23.543: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 23 14:37:23.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:37:25.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:37:27.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712708641, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:37:29.555: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 23 14:37:29.574: INFO: Updating deployment test-recreate-deployment
Dec 23 14:37:29.574: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 23 14:37:30.164: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7894,SelfLink:/apis/apps/v1/namespaces/deployment-7894/deployments/test-recreate-deployment,UID:9842801e-f753-4b6e-b657-2c830e913719,ResourceVersion:17776667,Generation:2,CreationTimestamp:2019-12-23 14:37:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-23 14:37:30 +0000 UTC 2019-12-23 14:37:30 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-23 14:37:30 +0000 UTC 2019-12-23 14:37:21 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 23 14:37:30.175: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7894,SelfLink:/apis/apps/v1/namespaces/deployment-7894/replicasets/test-recreate-deployment-5c8c9cc69d,UID:03ce181f-4646-4324-909e-ffc5298c771e,ResourceVersion:17776665,Generation:1,CreationTimestamp:2019-12-23 14:37:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9842801e-f753-4b6e-b657-2c830e913719 0xc0005d0dd7 0xc0005d0dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 14:37:30.175: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 23 14:37:30.175: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7894,SelfLink:/apis/apps/v1/namespaces/deployment-7894/replicasets/test-recreate-deployment-6df85df6b9,UID:9db66e64-7f3a-43e4-8ce7-66a81c226be0,ResourceVersion:17776655,Generation:2,CreationTimestamp:2019-12-23 14:37:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9842801e-f753-4b6e-b657-2c830e913719 0xc0005d0ea7 0xc0005d0ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 14:37:30.180: INFO: Pod "test-recreate-deployment-5c8c9cc69d-hsskh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-hsskh,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7894,SelfLink:/api/v1/namespaces/deployment-7894/pods/test-recreate-deployment-5c8c9cc69d-hsskh,UID:2b74dc01-b581-434a-b0ca-469fa82ab9d3,ResourceVersion:17776668,Generation:0,CreationTimestamp:2019-12-23 14:37:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 03ce181f-4646-4324-909e-ffc5298c771e 0xc0027d6257 0xc0027d6258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ndz8q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ndz8q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ndz8q true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d62d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d62f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:37:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:37:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:37:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:37:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-23 14:37:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:37:30.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7894" for this suite.
Dec 23 14:37:36.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:37:36.328: INFO: namespace deployment-7894 deletion completed in 6.141341459s

• [SLOW TEST:14.967 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
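
The deployment dump above shows Strategy Type:Recreate, i.e. all old pods are deleted before any new pod is created (hence the old ReplicaSet scaling to 0 first). A sketch of a manifest with that strategy, reconstructed from the dump rather than copied from the test's fixture:

kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate    # tear down old pods before starting new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
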
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:37:36.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 23 14:37:45.792: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:37:45.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1133" for this suite.
Dec 23 14:37:51.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:37:52.043: INFO: namespace container-runtime-1133 deletion completed in 6.142503376s

• [SLOW TEST:15.714 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
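
With terminationMessagePolicy: FallbackToLogsOnError, the kubelet uses the tail of the container log as the termination message when the container fails without writing to /dev/termination-log. A minimal hypothetical pod reproducing the "DONE" expectation logged above (names and image are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]    # log "DONE", then fail
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the container fails, the message is taken from the log tail
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
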
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:37:52.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-88bfd331-d6b8-49b1-9ba6-10d8223ac8c7
STEP: Creating a pod to test consume secrets
Dec 23 14:37:52.265: INFO: Waiting up to 5m0s for pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9" in namespace "secrets-194" to be "success or failure"
Dec 23 14:37:52.298: INFO: Pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.262986ms
Dec 23 14:37:54.306: INFO: Pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040114696s
Dec 23 14:37:56.319: INFO: Pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053615846s
Dec 23 14:37:58.330: INFO: Pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063886352s
Dec 23 14:38:00.339: INFO: Pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072964256s
STEP: Saw pod success
Dec 23 14:38:00.339: INFO: Pod "pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9" satisfied condition "success or failure"
Dec 23 14:38:00.343: INFO: Trying to get logs from node iruya-node pod pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9 container secret-volume-test: 
STEP: delete the pod
Dec 23 14:38:00.655: INFO: Waiting for pod pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9 to disappear
Dec 23 14:38:00.664: INFO: Pod pod-secrets-1c95cc0c-220d-40ae-8613-fe107bf265f9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:38:00.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-194" for this suite.
Dec 23 14:38:06.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:38:06.798: INFO: namespace secrets-194 deletion completed in 6.127237152s

• [SLOW TEST:14.754 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
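
The test mounts a secret volume with an explicit defaultMode and an fsGroup so a non-root user can read the files. A hypothetical equivalent (secret name, contents, and UIDs/GIDs are illustrative):

kubectl create secret generic test-secret --from-literal=data-1=value-1

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000    # non-root
    fsGroup: 2000      # volume files get this group
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      defaultMode: 0400    # octal mode applied to each projected key
EOF
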
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:38:06.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:38:06.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a" in namespace "projected-6539" to be "success or failure"
Dec 23 14:38:06.944: INFO: Pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a": Phase="Pending", Reason="", readiness=false. Elapsed: 66.983577ms
Dec 23 14:38:08.954: INFO: Pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076936627s
Dec 23 14:38:10.962: INFO: Pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085578415s
Dec 23 14:38:12.977: INFO: Pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100489886s
Dec 23 14:38:14.991: INFO: Pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114797496s
STEP: Saw pod success
Dec 23 14:38:14.992: INFO: Pod "downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a" satisfied condition "success or failure"
Dec 23 14:38:14.997: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a container client-container: 
STEP: delete the pod
Dec 23 14:38:15.052: INFO: Waiting for pod downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a to disappear
Dec 23 14:38:15.067: INFO: Pod downwardapi-volume-9d34ab26-0471-4eb2-9b18-8f628797275a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:38:15.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6539" for this suite.
Dec 23 14:38:21.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:38:21.384: INFO: namespace projected-6539 deletion completed in 6.307489059s

• [SLOW TEST:14.586 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
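
The downward-API volume here exposes the container's own CPU request as a file. A hypothetical minimal version; the 250m request, divisor, and file path are illustrative:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m    # file contains "250"
EOF
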
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:38:21.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:38:21.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd" in namespace "projected-2281" to be "success or failure"
Dec 23 14:38:21.516: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 55.884482ms
Dec 23 14:38:23.527: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066533205s
Dec 23 14:38:25.539: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078349252s
Dec 23 14:38:27.550: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089321279s
Dec 23 14:38:29.560: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099699975s
Dec 23 14:38:31.569: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108184595s
STEP: Saw pod success
Dec 23 14:38:31.569: INFO: Pod "downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd" satisfied condition "success or failure"
Dec 23 14:38:31.574: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd container client-container: 
STEP: delete the pod
Dec 23 14:38:31.712: INFO: Waiting for pod downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd to disappear
Dec 23 14:38:31.725: INFO: Pod downwardapi-volume-6ad13fe8-da0c-40f0-8cf9-d90a172ac1bd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:38:31.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2281" for this suite.
Dec 23 14:38:37.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:38:38.031: INFO: namespace projected-2281 deletion completed in 6.23095513s

• [SLOW TEST:16.647 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
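
This variant sets no memory limit at all, which is the point of the test: the reported value falls back to the node's allocatable memory rather than a container limit.

# With no resources.limits on the container, a downwardAPI item with
#   resourceFieldRef: {resource: limits.memory, divisor: 1Mi}
# reports the node's allocatable memory instead of a container limit.
# Cross-check against the node object (node name from this run):
kubectl get node iruya-node -o jsonpath='{.status.allocatable.memory}'
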
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:38:38.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-29ea430a-8451-4959-8ca3-27a5ce29a07d
STEP: Creating a pod to test consume configMaps
Dec 23 14:38:38.172: INFO: Waiting up to 5m0s for pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef" in namespace "configmap-8952" to be "success or failure"
Dec 23 14:38:38.177: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723625ms
Dec 23 14:38:40.188: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015402538s
Dec 23 14:38:42.224: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050955604s
Dec 23 14:38:44.233: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060054266s
Dec 23 14:38:46.247: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074729718s
Dec 23 14:38:48.260: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087201178s
STEP: Saw pod success
Dec 23 14:38:48.260: INFO: Pod "pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef" satisfied condition "success or failure"
Dec 23 14:38:48.269: INFO: Trying to get logs from node iruya-node pod pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef container configmap-volume-test: 
STEP: delete the pod
Dec 23 14:38:48.381: INFO: Waiting for pod pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef to disappear
Dec 23 14:38:48.400: INFO: Pod pod-configmaps-755e5809-42a9-4041-ae18-88332f599cef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:38:48.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8952" for this suite.
Dec 23 14:38:54.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:38:54.636: INFO: namespace configmap-8952 deletion completed in 6.226198047s

• [SLOW TEST:16.604 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
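
The plain (non-projected) ConfigMap volume consumed here can be reproduced with a throwaway ConfigMap; names and contents below are illustrative:

kubectl create configmap test-configmap --from-literal=data-1=value-1

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: test-configmap
EOF
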
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:38:54.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-31556479-bf85-4e6f-a611-27b6ccc7e73e
STEP: Creating a pod to test consume configMaps
Dec 23 14:38:54.828: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36" in namespace "projected-5867" to be "success or failure"
Dec 23 14:38:54.843: INFO: Pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36": Phase="Pending", Reason="", readiness=false. Elapsed: 15.457362ms
Dec 23 14:38:56.854: INFO: Pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026363859s
Dec 23 14:38:58.866: INFO: Pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037874891s
Dec 23 14:39:00.878: INFO: Pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050343168s
Dec 23 14:39:02.894: INFO: Pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065726337s
STEP: Saw pod success
Dec 23 14:39:02.894: INFO: Pod "pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36" satisfied condition "success or failure"
Dec 23 14:39:02.900: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 14:39:02.950: INFO: Waiting for pod pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36 to disappear
Dec 23 14:39:02.954: INFO: Pod pod-projected-configmaps-554af77e-1ced-4905-8940-5b950ef4ef36 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:39:02.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5867" for this suite.
Dec 23 14:39:08.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:39:09.094: INFO: namespace projected-5867 deletion completed in 6.130073526s

• [SLOW TEST:14.454 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
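
"With mappings" means individual ConfigMap keys are projected under chosen file names via items:, and "as non-root" adds a runAsUser. A hypothetical sketch reusing the ConfigMap from the previous example:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000    # non-root, per the [LinuxOnly] restriction
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected/renamed-key"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: test-configmap
          items:
          - key: data-1
            path: renamed-key    # the "mapping": key exposed under a new file name
EOF
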
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:39:09.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4658
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 14:39:09.163: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 14:39:49.417: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4658 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 14:39:49.417: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 14:39:49.876: INFO: Found all expected endpoints: [netserver-0]
Dec 23 14:39:49.888: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4658 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 14:39:49.888: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 14:39:50.202: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:39:50.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4658" for this suite.
Dec 23 14:40:14.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:40:14.355: INFO: namespace pod-network-test-4658 deletion completed in 24.13698344s

• [SLOW TEST:65.261 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
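The ExecWithOptions lines above show the check itself: a host-network test pod curls each netserver pod's /hostName endpoint on port 8080 and expects the serving pod's name back. A sketch of a pod serving such an endpoint, assuming the agnhost image's netexec server (image and tag are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-demo
  labels:
    net-test: "true"                   # illustrative selector label
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6   # assumed image/tag
    args: ["netexec", "--http-port=8080"]                  # HTTP server that reports its hostname
    ports:
    - containerPort: 8080

From a pod on the node network, curl http://<pod-ip>:8080/hostName then returns the serving pod's name, which is what "Found all expected endpoints" asserts.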
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:40:14.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9383
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9383
STEP: Deleting pre-stop pod
Dec 23 14:40:35.678: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:40:35.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9383" for this suite.
Dec 23 14:41:17.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:41:18.008: INFO: namespace prestop-9383 deletion completed in 42.306781826s

• [SLOW TEST:63.651 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
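The "Saw:" JSON above is the server pod's state: it received one "prestop" notification, posted by the tester pod's preStop hook before the tester was killed. A hedged sketch of a pod with such a hook (the endpoint, the SERVER_IP placeholder, and the image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: tester-demo
spec:
  containers:
  - name: tester
    image: busybox                     # assumed; the suite uses its own test images
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Runs when the pod is deleted, before the container is killed;
          # SERVER_IP stands in for the server pod's IP.
          command:
          - wget
          - -qO-
          - --post-data={"name":"prestop"}
          - http://SERVER_IP:8080/write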
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:41:18.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 23 14:41:18.092: INFO: Waiting up to 5m0s for pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4" in namespace "var-expansion-2602" to be "success or failure"
Dec 23 14:41:18.127: INFO: Pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.465052ms
Dec 23 14:41:20.138: INFO: Pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046139738s
Dec 23 14:41:22.152: INFO: Pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05924816s
Dec 23 14:41:24.176: INFO: Pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083652451s
Dec 23 14:41:26.185: INFO: Pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092553333s
STEP: Saw pod success
Dec 23 14:41:26.185: INFO: Pod "var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4" satisfied condition "success or failure"
Dec 23 14:41:26.190: INFO: Trying to get logs from node iruya-node pod var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4 container dapi-container: 
STEP: delete the pod
Dec 23 14:41:26.263: INFO: Waiting for pod var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4 to disappear
Dec 23 14:41:26.276: INFO: Pod var-expansion-d03866af-57e8-442a-aedc-fde56ff004f4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:41:26.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2602" for this suite.
Dec 23 14:41:32.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:41:32.511: INFO: namespace var-expansion-2602 deletion completed in 6.224155614s

• [SLOW TEST:14.503 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
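Env composition means one env var's value references previously defined vars with $(VAR) syntax, which the kubelet expands at container start. A minimal sketch (names and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "echo composed=$FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"          # expands to "foo-value;;bar-value"; only vars defined earlier in the list expand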
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:41:32.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-lr59
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 14:41:32.652: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lr59" in namespace "subpath-7060" to be "success or failure"
Dec 23 14:41:32.677: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Pending", Reason="", readiness=false. Elapsed: 24.41751ms
Dec 23 14:41:34.693: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040214554s
Dec 23 14:41:36.704: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051172595s
Dec 23 14:41:38.715: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062386533s
Dec 23 14:41:40.725: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072766404s
Dec 23 14:41:42.740: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087069973s
Dec 23 14:41:44.754: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 12.101424284s
Dec 23 14:41:46.766: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 14.113829316s
Dec 23 14:41:48.778: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 16.125040707s
Dec 23 14:41:50.797: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 18.144006916s
Dec 23 14:41:52.804: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 20.15175367s
Dec 23 14:41:54.822: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 22.169148931s
Dec 23 14:41:56.829: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 24.17659872s
Dec 23 14:41:58.843: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 26.190263108s
Dec 23 14:42:00.857: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 28.204764953s
Dec 23 14:42:02.869: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Running", Reason="", readiness=true. Elapsed: 30.216519651s
Dec 23 14:42:04.881: INFO: Pod "pod-subpath-test-secret-lr59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.228531169s
STEP: Saw pod success
Dec 23 14:42:04.882: INFO: Pod "pod-subpath-test-secret-lr59" satisfied condition "success or failure"
Dec 23 14:42:04.890: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-lr59 container test-container-subpath-secret-lr59: 
STEP: delete the pod
Dec 23 14:42:04.927: INFO: Waiting for pod pod-subpath-test-secret-lr59 to disappear
Dec 23 14:42:04.932: INFO: Pod pod-subpath-test-secret-lr59 no longer exists
STEP: Deleting pod pod-subpath-test-secret-lr59
Dec 23 14:42:04.932: INFO: Deleting pod "pod-subpath-test-secret-lr59" in namespace "subpath-7060"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:42:04.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7060" for this suite.
Dec 23 14:42:10.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:42:11.074: INFO: namespace subpath-7060 deletion completed in 6.133530163s

• [SLOW TEST:38.562 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
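"Atomic writer" volumes (secret, configMap, downwardAPI, projected) are written atomically via symlinked timestamped directories; the test mounts a single file from one through subPath. A minimal sketch, assuming an existing secret named my-secret with key data-1:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                     # assumed image
    command: ["cat", "/test-volume/data-1"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data-1   # mount just one file from the volume
      subPath: data-1
  volumes:
  - name: test-volume
    secret:
      secretName: my-secret            # assumed to exist in the namespace

Note that a subPath mount pins one file: unlike a whole-volume mount, it does not receive later updates to the secret.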
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:42:11.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f73a3148-0c5b-4b75-9a6a-8f98e0661976
STEP: Creating a pod to test consume configMaps
Dec 23 14:42:11.249: INFO: Waiting up to 5m0s for pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed" in namespace "configmap-6928" to be "success or failure"
Dec 23 14:42:11.258: INFO: Pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.438842ms
Dec 23 14:42:13.266: INFO: Pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016418523s
Dec 23 14:42:15.272: INFO: Pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023248416s
Dec 23 14:42:17.280: INFO: Pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030486342s
Dec 23 14:42:19.313: INFO: Pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063922735s
STEP: Saw pod success
Dec 23 14:42:19.313: INFO: Pod "pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed" satisfied condition "success or failure"
Dec 23 14:42:19.318: INFO: Trying to get logs from node iruya-node pod pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed container configmap-volume-test: 
STEP: delete the pod
Dec 23 14:42:19.497: INFO: Waiting for pod pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed to disappear
Dec 23 14:42:19.509: INFO: Pod pod-configmaps-530450de-2972-416d-8e09-ba5f4bd54aed no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:42:19.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6928" for this suite.
Dec 23 14:42:25.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:42:25.745: INFO: namespace configmap-6928 deletion completed in 6.159739963s

• [SLOW TEST:14.671 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
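"Item mode set" refers to the per-key mode field in a configMap volume's items list, which overrides the volume's defaultMode for that file. A minimal sketch (names, image, and mode are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-modes-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-modes-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-modes-demo
      items:
      - key: data-1
        path: path/to/data-2
        mode: 0400                     # per-item file mode, overriding defaultMode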
SSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:42:25.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-61d5de90-9b6c-43e4-813b-345e77b9900d
STEP: Creating secret with name secret-projected-all-test-volume-40ebbee6-b5e0-42e0-a354-eaa7734df7fb
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 23 14:42:25.863: INFO: Waiting up to 5m0s for pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88" in namespace "projected-2480" to be "success or failure"
Dec 23 14:42:25.872: INFO: Pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88": Phase="Pending", Reason="", readiness=false. Elapsed: 9.061311ms
Dec 23 14:42:27.916: INFO: Pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053191191s
Dec 23 14:42:29.927: INFO: Pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063537734s
Dec 23 14:42:31.939: INFO: Pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07530657s
Dec 23 14:42:33.963: INFO: Pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099941254s
STEP: Saw pod success
Dec 23 14:42:33.964: INFO: Pod "projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88" satisfied condition "success or failure"
Dec 23 14:42:33.973: INFO: Trying to get logs from node iruya-node pod projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88 container projected-all-volume-test: 
STEP: delete the pod
Dec 23 14:42:34.687: INFO: Waiting for pod projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88 to disappear
Dec 23 14:42:34.903: INFO: Pod projected-volume-7a22b219-e672-4e6b-9c57-0581f2dfbc88 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:42:34.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2480" for this suite.
Dec 23 14:42:40.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:42:41.042: INFO: namespace projected-2480 deletion completed in 6.131816483s

• [SLOW TEST:15.297 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
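"All components" means a single projected volume mixing configMap, secret, and downwardAPI sources, as the three "Creating ..." STEPs above set up. A minimal sketch of that volume shape (source names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls -R /all-volumes"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-volumes
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: cm-for-projection       # assumed to exist
      - secret:
          name: secret-for-projection   # assumed to exist
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name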
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:42:41.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-61a23fb1-4cf5-4d1b-b2b5-b55cf0b60446
STEP: Creating a pod to test consume configMaps
Dec 23 14:42:41.370: INFO: Waiting up to 5m0s for pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa" in namespace "configmap-2377" to be "success or failure"
Dec 23 14:42:41.390: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa": Phase="Pending", Reason="", readiness=false. Elapsed: 19.596497ms
Dec 23 14:42:43.401: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030392262s
Dec 23 14:42:45.426: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054988079s
Dec 23 14:42:47.439: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068197453s
Dec 23 14:42:49.450: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa": Phase="Running", Reason="", readiness=true. Elapsed: 8.079880647s
Dec 23 14:42:51.462: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091111004s
STEP: Saw pod success
Dec 23 14:42:51.462: INFO: Pod "pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa" satisfied condition "success or failure"
Dec 23 14:42:51.468: INFO: Trying to get logs from node iruya-node pod pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa container configmap-volume-test: 
STEP: delete the pod
Dec 23 14:42:51.522: INFO: Waiting for pod pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa to disappear
Dec 23 14:42:51.530: INFO: Pod pod-configmaps-eccd273b-11ed-49eb-b07c-0e7217f7e1aa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:42:51.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2377" for this suite.
Dec 23 14:42:57.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:42:57.756: INFO: namespace configmap-2377 deletion completed in 6.219411266s

• [SLOW TEST:16.713 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:42:57.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0160a261-5255-460c-81b4-3b77fb8187c8
STEP: Creating a pod to test consume configMaps
Dec 23 14:42:57.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb" in namespace "projected-3401" to be "success or failure"
Dec 23 14:42:57.907: INFO: Pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.724999ms
Dec 23 14:42:59.916: INFO: Pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014623861s
Dec 23 14:43:01.927: INFO: Pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02613444s
Dec 23 14:43:03.945: INFO: Pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043734497s
Dec 23 14:43:05.959: INFO: Pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057907687s
STEP: Saw pod success
Dec 23 14:43:05.960: INFO: Pod "pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb" satisfied condition "success or failure"
Dec 23 14:43:05.966: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 14:43:06.013: INFO: Waiting for pod pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb to disappear
Dec 23 14:43:06.039: INFO: Pod pod-projected-configmaps-7525a7bc-f72e-4b86-a834-e98da6d2fefb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:43:06.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3401" for this suite.
Dec 23 14:43:12.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:43:12.269: INFO: namespace projected-3401 deletion completed in 6.211054352s

• [SLOW TEST:14.512 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:43:12.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0e2e2c1d-0a8a-4fe2-b80a-ba9c67bc0466
STEP: Creating secret with name s-test-opt-upd-52a43c89-bcef-4dd2-b08a-48e177111e22
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0e2e2c1d-0a8a-4fe2-b80a-ba9c67bc0466
STEP: Updating secret s-test-opt-upd-52a43c89-bcef-4dd2-b08a-48e177111e22
STEP: Creating secret with name s-test-opt-create-d5fbddd0-7ec1-4f48-9545-e5904c4461dd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:43:28.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2032" for this suite.
Dec 23 14:43:50.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:43:50.991: INFO: namespace secrets-2032 deletion completed in 22.150473842s

• [SLOW TEST:38.722 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
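"Optional" secret volumes (optional: true) let the pod start and keep running even when the referenced secret is deleted, updated, or only created after the pod, which is exactly the delete/update/create sequence in the STEPs above. A minimal sketch of one such volume:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create-demo   # may not exist yet; kubelet populates the volume once it is created
      optional: true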
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:43:50.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-732
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 14:43:51.053: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 14:44:27.289: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-732 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 14:44:27.289: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 14:44:27.785: INFO: Waiting for endpoints: map[]
Dec 23 14:44:27.797: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-732 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 14:44:27.797: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 14:44:28.175: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:44:28.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-732" for this suite.
Dec 23 14:44:52.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:44:52.337: INFO: namespace pod-network-test-732 deletion completed in 24.148181138s

• [SLOW TEST:61.345 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:44:52.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:44:52.611: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"37ba48fe-374e-4070-91fd-a9fb4fcd4a48", Controller:(*bool)(0xc002eb5c82), BlockOwnerDeletion:(*bool)(0xc002eb5c83)}}
Dec 23 14:44:52.686: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5c059d69-b621-4789-901d-4d7a49c688a2", Controller:(*bool)(0xc002eb5e3a), BlockOwnerDeletion:(*bool)(0xc002eb5e3b)}}
Dec 23 14:44:52.749: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9c83682e-a729-4653-bdcc-1dfac11e558e", Controller:(*bool)(0xc002661662), BlockOwnerDeletion:(*bool)(0xc002661663)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:44:57.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6886" for this suite.
Dec 23 14:45:03.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:45:04.033: INFO: namespace gc-6886 deletion completed in 6.206387163s

• [SLOW TEST:11.695 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
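The three OwnerReferences lines above form a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2; the garbage collector must still delete all three rather than deadlock on the circle. An owner reference looks like this in a manifest (the uid is server-assigned, so the placeholder must be replaced with the real owner's UID):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: REPLACE-WITH-POD3-UID         # must match the owner object's actual UID
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: sleeper
    image: busybox                     # assumed image
    command: ["sleep", "3600"]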
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:45:04.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1223 14:45:07.877037       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 14:45:07.877: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:45:07.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2093" for this suite.
Dec 23 14:45:13.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:45:14.140: INFO: namespace gc-2093 deletion completed in 6.232837415s

• [SLOW TEST:10.107 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
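Deleting a Deployment with the default (non-orphaning) propagation lets the garbage collector remove the ReplicaSet it created, and then the pods, which is what the "wait for all rs to be garbage collected" polling above checks. A minimal Deployment sketch (name and image tag are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17              # assumed tag
# kubectl delete deployment gc-demo-deployment   -> default (background)
# propagation; the owned ReplicaSet and pods are garbage collected afterwards.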
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:45:14.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 23 14:45:14.333: INFO: Waiting up to 5m0s for pod "pod-c25955ea-814d-4903-9c5c-10107034b771" in namespace "emptydir-1701" to be "success or failure"
Dec 23 14:45:14.340: INFO: Pod "pod-c25955ea-814d-4903-9c5c-10107034b771": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28363ms
Dec 23 14:45:16.349: INFO: Pod "pod-c25955ea-814d-4903-9c5c-10107034b771": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015508082s
Dec 23 14:45:18.366: INFO: Pod "pod-c25955ea-814d-4903-9c5c-10107034b771": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032395293s
Dec 23 14:45:20.378: INFO: Pod "pod-c25955ea-814d-4903-9c5c-10107034b771": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044775007s
Dec 23 14:45:22.388: INFO: Pod "pod-c25955ea-814d-4903-9c5c-10107034b771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054840749s
STEP: Saw pod success
Dec 23 14:45:22.389: INFO: Pod "pod-c25955ea-814d-4903-9c5c-10107034b771" satisfied condition "success or failure"
Dec 23 14:45:22.391: INFO: Trying to get logs from node iruya-node pod pod-c25955ea-814d-4903-9c5c-10107034b771 container test-container: 
STEP: delete the pod
Dec 23 14:45:22.490: INFO: Waiting for pod pod-c25955ea-814d-4903-9c5c-10107034b771 to disappear
Dec 23 14:45:22.497: INFO: Pod pod-c25955ea-814d-4903-9c5c-10107034b771 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:45:22.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1701" for this suite.
Dec 23 14:45:28.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:45:28.696: INFO: namespace emptydir-1701 deletion completed in 6.192661763s

• [SLOW TEST:14.555 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
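The test mounts an emptyDir on the default (node-disk) medium and verifies the mount's file mode. A rough equivalent using a stock shell instead of the suite's mounttest image (image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # prints the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium: backed by node storage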
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:45:28.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 23 14:45:28.885: INFO: Waiting up to 5m0s for pod "downward-api-3532f190-f562-4260-a378-b8829adde29d" in namespace "downward-api-7521" to be "success or failure"
Dec 23 14:45:28.913: INFO: Pod "downward-api-3532f190-f562-4260-a378-b8829adde29d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.480829ms
Dec 23 14:45:30.925: INFO: Pod "downward-api-3532f190-f562-4260-a378-b8829adde29d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039844756s
Dec 23 14:45:32.943: INFO: Pod "downward-api-3532f190-f562-4260-a378-b8829adde29d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058171181s
Dec 23 14:45:34.957: INFO: Pod "downward-api-3532f190-f562-4260-a378-b8829adde29d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071521299s
Dec 23 14:45:36.976: INFO: Pod "downward-api-3532f190-f562-4260-a378-b8829adde29d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090666048s
STEP: Saw pod success
Dec 23 14:45:36.976: INFO: Pod "downward-api-3532f190-f562-4260-a378-b8829adde29d" satisfied condition "success or failure"
Dec 23 14:45:36.987: INFO: Trying to get logs from node iruya-node pod downward-api-3532f190-f562-4260-a378-b8829adde29d container dapi-container: 
STEP: delete the pod
Dec 23 14:45:37.171: INFO: Waiting for pod downward-api-3532f190-f562-4260-a378-b8829adde29d to disappear
Dec 23 14:45:37.208: INFO: Pod downward-api-3532f190-f562-4260-a378-b8829adde29d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:45:37.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7521" for this suite.
Dec 23 14:45:45.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:45:45.476: INFO: namespace downward-api-7521 deletion completed in 8.259596319s

• [SLOW TEST:16.778 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
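The downward API exposes pod metadata to containers as env vars via fieldRef; metadata.uid is the field under test here. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid      # the pod's server-assigned UID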
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:45:45.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-414c1a81-55d2-4865-890d-e0ff2eb32e55 in namespace container-probe-8455
Dec 23 14:45:53.736: INFO: Started pod liveness-414c1a81-55d2-4865-890d-e0ff2eb32e55 in namespace container-probe-8455
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 14:45:53.742: INFO: Initial restart count of pod liveness-414c1a81-55d2-4865-890d-e0ff2eb32e55 is 0
Dec 23 14:46:11.931: INFO: Restart count of pod container-probe-8455/liveness-414c1a81-55d2-4865-890d-e0ff2eb32e55 is now 1 (18.188511725s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:46:11.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8455" for this suite.
Dec 23 14:46:18.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:46:18.169: INFO: namespace container-probe-8455 deletion completed in 6.177541087s

• [SLOW TEST:32.693 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
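The restart is driven by an HTTP liveness probe against /healthz: once the server starts failing, the kubelet kills and restarts the container, which is the restartCount 0 -> 1 transition logged above. A sketch in the shape of the upstream docs example (image and timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness         # assumed; serves /healthz OK for a while, then returns 500s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1              # restart on the first failed probe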
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:46:18.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 23 14:46:18.346: INFO: Waiting up to 5m0s for pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757" in namespace "emptydir-254" to be "success or failure"
Dec 23 14:46:18.366: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757": Phase="Pending", Reason="", readiness=false. Elapsed: 20.12912ms
Dec 23 14:46:20.374: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028583186s
Dec 23 14:46:22.388: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041890582s
Dec 23 14:46:24.395: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049595619s
Dec 23 14:46:26.419: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073502144s
Dec 23 14:46:28.430: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08444876s
STEP: Saw pod success
Dec 23 14:46:28.431: INFO: Pod "pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757" satisfied condition "success or failure"
Dec 23 14:46:28.437: INFO: Trying to get logs from node iruya-node pod pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757 container test-container: 
STEP: delete the pod
Dec 23 14:46:28.879: INFO: Waiting for pod pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757 to disappear
Dec 23 14:46:28.898: INFO: Pod pod-6ab6ab45-fb8e-460b-8f81-78ffd2e34757 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:46:28.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-254" for this suite.
Dec 23 14:46:34.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:46:35.077: INFO: namespace emptydir-254 deletion completed in 6.172657116s

• [SLOW TEST:16.907 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
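(root,0777,tmpfs) means: write as root, expect 0777 permissions, on a memory-backed emptyDir. A rough shell equivalent of the suite's mounttest check (image and commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image; runs as root by default
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed instead of node disk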
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:46:35.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:46:35.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:46:45.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1464" for this suite.
Dec 23 14:47:47.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:47:47.523: INFO: namespace pods-1464 deletion completed in 1m2.167553709s

• [SLOW TEST:72.446 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:47:47.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 23 14:48:05.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:05.795: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 14:48:07.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:07.836: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 14:48:09.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:09.810: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 14:48:11.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:11.849: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 14:48:13.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:13.824: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 14:48:15.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:15.808: INFO: Pod pod-with-prestop-http-hook still exists
Dec 23 14:48:17.796: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 23 14:48:17.806: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:48:17.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6567" for this suite.
Dec 23 14:48:39.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:48:40.112: INFO: namespace container-lifecycle-hook-6567 deletion completed in 22.186926436s

• [SLOW TEST:52.589 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
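Unlike the exec hook sketched earlier, this test uses an httpGet preStop hook: on deletion the kubelet issues an HTTP GET, and the handler pod created in BeforeEach records the request ("check prestop hook" above). A hedged sketch (path, port, and the HANDLER_POD_IP placeholder are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox                     # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop      # illustrative path on the handler pod
          port: 8080
          host: HANDLER_POD_IP         # placeholder for the handler pod's IP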
S
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:48:40.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:48:40.217: INFO: Creating deployment "nginx-deployment"
Dec 23 14:48:40.238: INFO: Waiting for observed generation 1
Dec 23 14:48:42.536: INFO: Waiting for all required pods to come up
Dec 23 14:48:42.594: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 23 14:49:09.369: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 23 14:49:09.383: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 23 14:49:09.403: INFO: Updating deployment nginx-deployment
Dec 23 14:49:09.403: INFO: Waiting for observed generation 2
Dec 23 14:49:12.353: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 23 14:49:12.980: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 23 14:49:13.064: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 23 14:49:13.169: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 23 14:49:13.169: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 23 14:49:13.172: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 23 14:49:13.178: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 23 14:49:13.178: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 23 14:49:13.185: INFO: Updating deployment nginx-deployment
Dec 23 14:49:13.186: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 23 14:49:14.127: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 23 14:49:14.171: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 23 14:49:14.945: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1116,SelfLink:/apis/apps/v1/namespaces/deployment-1116/deployments/nginx-deployment,UID:126019ea-c6e8-4361-b950-5deec3d214bc,ResourceVersion:17778553,Generation:3,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-23 14:49:09 +0000 UTC 2019-12-23 14:48:40 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-23 14:49:14 +0000 UTC 2019-12-23 14:49:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 23 14:49:15.225: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1116,SelfLink:/apis/apps/v1/namespaces/deployment-1116/replicasets/nginx-deployment-55fb7cb77f,UID:5be5b41a-e440-4631-9d7a-cb6d0f12da75,ResourceVersion:17778549,Generation:3,CreationTimestamp:2019-12-23 14:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 126019ea-c6e8-4361-b950-5deec3d214bc 0xc001d45937 0xc001d45938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 14:49:15.225: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 23 14:49:15.225: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1116,SelfLink:/apis/apps/v1/namespaces/deployment-1116/replicasets/nginx-deployment-7b8c6f4498,UID:946fe83b-6745-49f3-96c0-381f88099516,ResourceVersion:17778548,Generation:3,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 126019ea-c6e8-4361-b950-5deec3d214bc 0xc001d45b97 0xc001d45b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 23 14:49:15.309: INFO: Pod "nginx-deployment-55fb7cb77f-44w7b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-44w7b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-44w7b,UID:870a5a0f-456b-43e0-a266-f361b5bebad9,ResourceVersion:17778576,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced1c7 0xc000ced1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.310: INFO: Pod "nginx-deployment-55fb7cb77f-4mgn6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4mgn6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-4mgn6,UID:acda30fd-6a1f-4926-8046-f4ba83f2ca48,ResourceVersion:17778538,Generation:0,CreationTimestamp:2019-12-23 14:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced2e7 0xc000ced2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-23 14:49:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.310: INFO: Pod "nginx-deployment-55fb7cb77f-6gmzw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6gmzw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-6gmzw,UID:3aec14b0-2d3d-4e16-b257-24e52e16717b,ResourceVersion:17778590,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced477 0xc000ced478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced4f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.310: INFO: Pod "nginx-deployment-55fb7cb77f-7wbxd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7wbxd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-7wbxd,UID:1e02a9ce-7e3b-4f06-95f7-798b77ad4d47,ResourceVersion:17778540,Generation:0,CreationTimestamp:2019-12-23 14:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced597 0xc000ced598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-23 14:49:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.311: INFO: Pod "nginx-deployment-55fb7cb77f-8sbq9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8sbq9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-8sbq9,UID:589e5bf7-7a32-4359-b8cd-52500815c4da,ResourceVersion:17778586,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced6f7 0xc000ced6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.311: INFO: Pod "nginx-deployment-55fb7cb77f-gz68j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gz68j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-gz68j,UID:b0d3a0f9-735f-46fe-884a-75a0c527d601,ResourceVersion:17778575,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced800 0xc000ced801}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.311: INFO: Pod "nginx-deployment-55fb7cb77f-js9wj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-js9wj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-js9wj,UID:6fb14040-5efc-47d5-b856-ae9ec0b5e368,ResourceVersion:17778591,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ced927 0xc000ced928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ced9a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ced9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.312: INFO: Pod "nginx-deployment-55fb7cb77f-mq87l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mq87l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-mq87l,UID:84ba6f5f-ae90-4725-b9ef-c1c899702fda,ResourceVersion:17778526,Generation:0,CreationTimestamp:2019-12-23 14:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ceda47 0xc000ceda48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cedac0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cedae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-23 14:49:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.312: INFO: Pod "nginx-deployment-55fb7cb77f-ns6lj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ns6lj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-ns6lj,UID:8aa865ec-e32b-4796-960e-77d4f9d52377,ResourceVersion:17778561,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000cedbb7 0xc000cedbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cedc20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cedc40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.312: INFO: Pod "nginx-deployment-55fb7cb77f-pqq8t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pqq8t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-pqq8t,UID:720938ce-5e94-43e5-8e80-a6d91cec65af,ResourceVersion:17778580,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000cedcc7 0xc000cedcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cedd30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cedd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.312: INFO: Pod "nginx-deployment-55fb7cb77f-r4bwk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r4bwk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-r4bwk,UID:a60dbd4e-3359-4615-8157-74e1adcacc6d,ResourceVersion:17778579,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000ceddf7 0xc000ceddf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cede90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cedeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.313: INFO: Pod "nginx-deployment-55fb7cb77f-rpvv9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rpvv9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-rpvv9,UID:05a8600a-68be-4619-b469-aad3f34bbbf0,ResourceVersion:17778514,Generation:0,CreationTimestamp:2019-12-23 14:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc000cedf47 0xc000cedf48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cedfb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cedfd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-23 14:49:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.313: INFO: Pod "nginx-deployment-55fb7cb77f-zmhq5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zmhq5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-55fb7cb77f-zmhq5,UID:5af00481-616d-4661-8c5f-089fa51ca454,ResourceVersion:17778515,Generation:0,CreationTimestamp:2019-12-23 14:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5be5b41a-e440-4631-9d7a-cb6d0f12da75 0xc001f300c7 0xc001f300c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f30150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f30170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-23 14:49:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.314: INFO: Pod "nginx-deployment-7b8c6f4498-2slf5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2slf5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-2slf5,UID:5fffd52e-3e81-43d2-a5a3-fb301888913d,ResourceVersion:17778464,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f30247 0xc001f30248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f302d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f302f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a08fc43870000b82723b396c38d6c6ec73fb8ef2c6f6b63edb73aec23940f302}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.315: INFO: Pod "nginx-deployment-7b8c6f4498-59ltr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-59ltr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-59ltr,UID:3c21607a-1f1c-4716-99ee-585d68b1d39b,ResourceVersion:17778587,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f303d7 0xc001f303d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f30490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f304b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.315: INFO: Pod "nginx-deployment-7b8c6f4498-5j8vt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5j8vt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-5j8vt,UID:6a79b053-20cf-4c85-a6f1-a6a1602def33,ResourceVersion:17778482,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f30537 0xc001f30538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f305b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f305d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d1c5c6ddc631660b9f07957b8e0e2cb63c595032ba17db46ceb7bece85eda324}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.315: INFO: Pod "nginx-deployment-7b8c6f4498-5qlks" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5qlks,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-5qlks,UID:bf99537d-3be3-4b65-89a9-1a27d443cfec,ResourceVersion:17778459,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f306a7 0xc001f306a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f30750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f30780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c433944a15e5e4a688392270427f3ebd3c7cadc5d221fa7b6743cef456ce85c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.316: INFO: Pod "nginx-deployment-7b8c6f4498-8bmzn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8bmzn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-8bmzn,UID:6371e87d-8812-4254-9f6e-d661579a6c2b,ResourceVersion:17778573,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f30957 0xc001f30958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f30b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f30c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.316: INFO: Pod "nginx-deployment-7b8c6f4498-8zfdg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8zfdg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-8zfdg,UID:5fe16566-7b41-474a-9fc3-c943055f9a9d,ResourceVersion:17778484,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f30e07 0xc001f30e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f30e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f30eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2d2b3287b618879ac0dc5586fd95e887ac64af3d28e7183f700f1a0421d5a6c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.316: INFO: Pod "nginx-deployment-7b8c6f4498-9lkxz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9lkxz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-9lkxz,UID:bb79aeeb-8313-4fed-a786-2baf33a16f9e,ResourceVersion:17778588,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f30f87 0xc001f30f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f310e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.317: INFO: Pod "nginx-deployment-7b8c6f4498-b4qbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b4qbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-b4qbf,UID:68e5282f-24d7-4b0a-a09d-5a53f54f7a06,ResourceVersion:17778581,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31187 0xc001f31188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f311f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.317: INFO: Pod "nginx-deployment-7b8c6f4498-bhn6t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhn6t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-bhn6t,UID:8d0218ac-c345-49fa-ac34-4b79357320f6,ResourceVersion:17778584,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31370 0xc001f31371}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f31470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.317: INFO: Pod "nginx-deployment-7b8c6f4498-l9fmn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l9fmn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-l9fmn,UID:d9350289-3cd6-4877-9607-230ec6182f30,ResourceVersion:17778468,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31510 0xc001f31511}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f316b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f316d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6b3acf8c0b77774849b55bc0bcba205e0a48434dfa2a2721bb8e61e9c192623e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.318: INFO: Pod "nginx-deployment-7b8c6f4498-nznq4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nznq4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-nznq4,UID:7c215a48-3420-45f2-a586-0a571b95569c,ResourceVersion:17778574,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31907 0xc001f31908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f319b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.318: INFO: Pod "nginx-deployment-7b8c6f4498-qcdns" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qcdns,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-qcdns,UID:4a5091df-f87e-4f73-84d2-b13aab617150,ResourceVersion:17778449,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31b27 0xc001f31b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f31ba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ed2ce54b7dbe608a396cbbf1ee6a69dc21645dc196af4be003bfa6953ef1b794}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.329: INFO: Pod "nginx-deployment-7b8c6f4498-qffbb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qffbb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-qffbb,UID:984d14bf-4c26-4d3d-92f7-41e333572216,ResourceVersion:17778585,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31ca7 0xc001f31ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f31d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.330: INFO: Pod "nginx-deployment-7b8c6f4498-qg7qh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qg7qh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-qg7qh,UID:6e959ae7-2e53-46f5-bf2a-d1bd4034db72,ResourceVersion:17778589,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31da0 0xc001f31da1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f31e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.331: INFO: Pod "nginx-deployment-7b8c6f4498-qw446" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qw446,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-qw446,UID:b50fb5de-dd25-4ce6-92bb-d7e1ef383f85,ResourceVersion:17778582,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31eb7 0xc001f31eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f31f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f31f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.331: INFO: Pod "nginx-deployment-7b8c6f4498-qwm2h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qwm2h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-qwm2h,UID:d61a2d49-0ac8-48ba-b7a1-189d369528e7,ResourceVersion:17778583,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc001f31fb0 0xc001f31fb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d40010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d40030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.332: INFO: Pod "nginx-deployment-7b8c6f4498-rv9ss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rv9ss,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-rv9ss,UID:dca8895b-bf70-4e45-b685-c65b6ac55c63,ResourceVersion:17778560,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc002d400a0 0xc002d400a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d40100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d40120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.332: INFO: Pod "nginx-deployment-7b8c6f4498-tkcr8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tkcr8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-tkcr8,UID:b8af1905-f7cd-498c-a592-debd547a8012,ResourceVersion:17778454,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc002d401a7 0xc002d401a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d40220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d40240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1c21cfce557d984959753212255ae68a2eedb07303dab44266d63fe24b5df9cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.333: INFO: Pod "nginx-deployment-7b8c6f4498-vlfps" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vlfps,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-vlfps,UID:47dec26c-8c38-45e5-b6ed-fd14b4ba4a9d,ResourceVersion:17778592,Generation:0,CreationTimestamp:2019-12-23 14:49:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc002d40317 0xc002d40318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d40390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d403b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 23 14:49:15.333: INFO: Pod "nginx-deployment-7b8c6f4498-wj66k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wj66k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1116,SelfLink:/api/v1/namespaces/deployment-1116/pods/nginx-deployment-7b8c6f4498-wj66k,UID:87d0f375-26c7-4ef7-9be6-16ecaddf785e,ResourceVersion:17778479,Generation:0,CreationTimestamp:2019-12-23 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 946fe83b-6745-49f3-96c0-381f88099516 0xc002d40437 0xc002d40438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zqbpm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zqbpm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zqbpm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d404a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d404c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:49:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:48:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2019-12-23 14:48:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-23 14:49:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://33183f772723f95fd2f2d76a107c0eb94f9569851633ca7948e701a6d86ef38b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:49:15.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1116" for this suite.
Dec 23 14:50:06.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:50:07.127: INFO: namespace deployment-1116 deletion completed in 50.849689726s

• [SLOW TEST:87.014 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
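Annotation: the mix of "available" and "not available" pods dumped above is exactly the behaviour under test. When a Deployment is scaled while a rollout is in flight, the controller splits the added replicas between the old and new ReplicaSets in proportion to their current sizes, so some pods in ReplicaSet 7b8c6f4498 are Running while others are still Pending or unscheduled. A minimal client-go sketch of the setup, reusing the kubeconfig path, namespace, labels, and image visible in this run; the helper is illustrative, not the e2e framework's own:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        labels := map[string]string{"name": "nginx"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(10),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }}},
                },
            },
        }
        // Create the Deployment, then (as the test goes on to do) start an
        // image update, pause it mid-rollout, and raise Replicas; the
        // controller distributes the new pods proportionally across both
        // ReplicaSets, which produces the available/not-available mix above.
        if _, err := cs.AppsV1().Deployments("deployment-1116").Create(d); err != nil {
            panic(err)
        }
    }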
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:50:07.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 23 14:50:16.943: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:50:17.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6803" for this suite.
Dec 23 14:50:23.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:50:23.312: INFO: namespace container-runtime-6803 deletion completed in 6.251924566s

• [SLOW TEST:16.185 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
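Annotation: FallbackToLogsOnError only substitutes container logs for the termination message when the container fails. This test's container exits 0 without writing to /dev/termination-log, so the kubelet reports an empty message, which is what the "Expected: &{} to match" line above asserts. A hedged sketch of such a container spec; the image and command are assumptions, the policy constant is the real v1 API value:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // succeedingContainer exits 0 and writes nothing to the termination-log
    // path, so no message is recorded; because the pod succeeded, the
    // FallbackToLogsOnError policy never falls back to the container logs.
    func succeedingContainer() corev1.Container {
        return corev1.Container{
            Name:    "termination-message-container",
            Image:   "busybox",
            Command: []string{"/bin/sh", "-c", "exit 0"},
            TerminationMessagePath:   "/dev/termination-log",
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
    }

    func main() { fmt.Printf("%+v\n", succeedingContainer()) }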
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:50:23.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 23 14:50:23.510: INFO: Waiting up to 5m0s for pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47" in namespace "var-expansion-5796" to be "success or failure"
Dec 23 14:50:23.514: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045379ms
Dec 23 14:50:25.523: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012677511s
Dec 23 14:50:27.540: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02967931s
Dec 23 14:50:29.548: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038150713s
Dec 23 14:50:31.556: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045920495s
Dec 23 14:50:33.568: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058144672s
STEP: Saw pod success
Dec 23 14:50:33.569: INFO: Pod "var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47" satisfied condition "success or failure"
Dec 23 14:50:33.573: INFO: Trying to get logs from node iruya-node pod var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47 container dapi-container: 
STEP: delete the pod
Dec 23 14:50:33.709: INFO: Waiting for pod var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47 to disappear
Dec 23 14:50:33.716: INFO: Pod var-expansion-4d6c18b8-0105-46a1-b686-d1b9203c3d47 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:50:33.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5796" for this suite.
Dec 23 14:50:39.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:50:39.923: INFO: namespace var-expansion-5796 deletion completed in 6.200141085s

• [SLOW TEST:16.610 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
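Annotation: variable expansion in a container's args is an API substitution, not a shell feature: $(VAR) references in command/args are replaced with the values of variables declared earlier in env before the container starts. A sketch of the kind of pod this spec creates; the container name dapi-container appears in the log, while the image, variable name, and values are assumptions:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func expansionPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c"},
                    // $(TEST_VAR) is expanded by Kubernetes before the shell
                    // ever sees it; the container just echoes the result.
                    Args: []string{"echo test-value=$(TEST_VAR)"},
                    Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                }},
            },
        }
    }

    func main() { _ = expansionPod() }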
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:50:39.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 23 14:50:49.516: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:50:49.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6957" for this suite.
Dec 23 14:50:55.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:50:55.743: INFO: namespace container-runtime-6957 deletion completed in 6.175039903s

• [SLOW TEST:15.819 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
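Annotation: this is the companion case to the earlier termination-message spec. Here the container does write to /dev/termination-log before exiting 0, so the kubelet reads the message from the file and the logs are never consulted, matching the "Expected: &{OK} ... OK" line above. A hedged sketch with the image and command as assumptions:

    package main

    import corev1 "k8s.io/api/core/v1"

    // fileMessageContainer writes "OK" to the termination-log path and exits 0;
    // the kubelet reports "OK" from the file, and FallbackToLogsOnError never
    // applies because a message was found.
    func fileMessageContainer() corev1.Container {
        return corev1.Container{
            Name:    "termination-message-container",
            Image:   "busybox",
            Command: []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},
            TerminationMessagePath:   "/dev/termination-log",
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
    }

    func main() { _ = fileMessageContainer() }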
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:50:55.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:50:55.933: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:50:57.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4616" for this suite.
Dec 23 14:51:03.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:51:03.317: INFO: namespace custom-resource-definition-4616 deletion completed in 6.191022888s

• [SLOW TEST:7.574 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
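Annotation: this spec needs no pods at all, which is why it finishes in seconds. It builds an apiextensions client from the same kubeconfig (hence the second ">>> kubeConfig" line in the block above) and round-trips a definition through create and delete. A minimal sketch against the v1beta1 CRD API current in this 1.15 cluster; the group and kind are assumed placeholders:

    package main

    import (
        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The CRD name must be <plural>.<group>.
        crd := &apiextv1beta1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1",
                Scope:   apiextv1beta1.NamespaceScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural: "foos",
                    Kind:   "Foo",
                },
            },
        }
        crds := client.ApiextensionsV1beta1().CustomResourceDefinitions()
        if _, err := crds.Create(crd); err != nil {
            panic(err)
        }
        if err := crds.Delete(crd.Name, &metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }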
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:51:03.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:51:03.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee" in namespace "downward-api-4913" to be "success or failure"
Dec 23 14:51:03.542: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee": Phase="Pending", Reason="", readiness=false. Elapsed: 17.177278ms
Dec 23 14:51:05.551: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026615832s
Dec 23 14:51:07.560: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035764956s
Dec 23 14:51:09.577: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052742149s
Dec 23 14:51:11.586: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061427401s
Dec 23 14:51:13.597: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072003585s
STEP: Saw pod success
Dec 23 14:51:13.597: INFO: Pod "downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee" satisfied condition "success or failure"
Dec 23 14:51:13.601: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee container client-container: 
STEP: delete the pod
Dec 23 14:51:13.682: INFO: Waiting for pod downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee to disappear
Dec 23 14:51:13.689: INFO: Pod downwardapi-volume-e4dc99df-e71b-4b5c-a673-73e182761fee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:51:13.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4913" for this suite.
Dec 23 14:51:19.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:51:19.901: INFO: namespace downward-api-4913 deletion completed in 6.204110408s

• [SLOW TEST:16.583 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
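The pod in this spec projects the container's own memory request into a file through a downwardAPI volume. A minimal sketch follows (pod and file names are illustrative, not the suite's fixture); the projected value is a plain byte count, so a 64Mi request reads back as 67108864. The later Downward API spec in this log differs only in projecting limits.memory, which the kubelet resolves to node allocatable when the container declares no limit.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// Project this container's requests.memory (in bytes).
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}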
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:51:19.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 23 14:51:20.033: INFO: Waiting up to 5m0s for pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa" in namespace "emptydir-2469" to be "success or failure"
Dec 23 14:51:20.055: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa": Phase="Pending", Reason="", readiness=false. Elapsed: 22.108155ms
Dec 23 14:51:22.064: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03095891s
Dec 23 14:51:24.132: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099320028s
Dec 23 14:51:26.153: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119645274s
Dec 23 14:51:28.167: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134000989s
Dec 23 14:51:30.178: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144721063s
STEP: Saw pod success
Dec 23 14:51:30.178: INFO: Pod "pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa" satisfied condition "success or failure"
Dec 23 14:51:30.189: INFO: Trying to get logs from node iruya-node pod pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa container test-container: 
STEP: delete the pod
Dec 23 14:51:30.245: INFO: Waiting for pod pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa to disappear
Dec 23 14:51:30.252: INFO: Pod pod-a6b03c57-a03f-4a9c-a0dd-fd7ab4902caa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:51:30.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2469" for this suite.
Dec 23 14:51:36.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:51:36.428: INFO: namespace emptydir-2469 deletion completed in 6.170883941s

• [SLOW TEST:16.526 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
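A sketch of the pod shape this emptyDir case relies on, with invented names. Medium "Memory" backs the volume with tmpfs; the container then creates a file, forces mode 0644, and prints the result for the harness to match. The later spec "volume on tmpfs should have the correct mode" checks the mount directory's own default mode (0777) in the same fashion.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a 0644 file, then show both its mode and the tmpfs mount
				// backing the volume.
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs (RAM).
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}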
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:51:36.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:51:46.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1491" for this suite.
Dec 23 14:52:28.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:52:28.973: INFO: namespace kubelet-test-1491 deletion completed in 42.302820934s

• [SLOW TEST:52.545 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
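The hostAliases feature under test injects extra /etc/hosts entries when the kubelet sets up the pod sandbox. A minimal sketch, with an invented IP and hostnames:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// Each alias becomes a line in the kubelet-managed block of /etc/hosts.
			HostAliases: []v1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []v1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/hosts"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The container's /etc/hosts then ends with an entry like "123.45.67.89 foo.local bar.local", which is what the spec asserts on.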
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:52:28.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 23 14:52:29.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a" in namespace "downward-api-3666" to be "success or failure"
Dec 23 14:52:29.113: INFO: Pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.612315ms
Dec 23 14:52:31.121: INFO: Pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024943828s
Dec 23 14:52:33.133: INFO: Pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036238919s
Dec 23 14:52:35.143: INFO: Pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046557487s
Dec 23 14:52:37.172: INFO: Pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075675789s
STEP: Saw pod success
Dec 23 14:52:37.172: INFO: Pod "downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a" satisfied condition "success or failure"
Dec 23 14:52:37.177: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a container client-container: 
STEP: delete the pod
Dec 23 14:52:37.263: INFO: Waiting for pod downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a to disappear
Dec 23 14:52:37.402: INFO: Pod downwardapi-volume-dc922543-fad0-41f9-a841-add7c20b948a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:52:37.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3666" for this suite.
Dec 23 14:52:43.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:52:43.587: INFO: namespace downward-api-3666 deletion completed in 6.170345169s

• [SLOW TEST:14.613 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:52:43.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 14:52:43.765: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 23 14:52:43.842: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 23 14:52:48.855: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Dec 23 14:52:50.875: INFO: Creating deployment "test-rolling-update-deployment"
Dec 23 14:52:50.888: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 23 14:52:50.899: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 23 14:52:52.919: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Dec 23 14:52:52.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709571, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:52:54.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709571, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:52:56.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709571, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712709570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 14:52:58.930: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 23 14:52:58.941: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7037,SelfLink:/apis/apps/v1/namespaces/deployment-7037/deployments/test-rolling-update-deployment,UID:1165c080-8b67-4d8b-aecb-53cea95cc9bb,ResourceVersion:17779288,Generation:1,CreationTimestamp:2019-12-23 14:52:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-23 14:52:50 +0000 UTC 2019-12-23 14:52:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-23 14:52:58 +0000 UTC 2019-12-23 14:52:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 23 14:52:58.944: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7037,SelfLink:/apis/apps/v1/namespaces/deployment-7037/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:a6b5b97e-bef3-45c8-b8e4-aebbd812e889,ResourceVersion:17779278,Generation:1,CreationTimestamp:2019-12-23 14:52:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1165c080-8b67-4d8b-aecb-53cea95cc9bb 0xc0021bc497 0xc0021bc498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 23 14:52:58.944: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 23 14:52:58.945: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7037,SelfLink:/apis/apps/v1/namespaces/deployment-7037/replicasets/test-rolling-update-controller,UID:b1294eb1-e9d5-4708-bda1-5b7e875c1782,ResourceVersion:17779287,Generation:2,CreationTimestamp:2019-12-23 14:52:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1165c080-8b67-4d8b-aecb-53cea95cc9bb 0xc0021bc3c7 0xc0021bc3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 23 14:52:58.949: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-x8jw5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-x8jw5,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7037,SelfLink:/api/v1/namespaces/deployment-7037/pods/test-rolling-update-deployment-79f6b9d75c-x8jw5,UID:3644d083-1412-48b7-a92a-1bfc806264bd,ResourceVersion:17779277,Generation:0,CreationTimestamp:2019-12-23 14:52:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c a6b5b97e-bef3-45c8-b8e4-aebbd812e889 0xc0021bcda7 0xc0021bcda8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tdjpx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tdjpx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-tdjpx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021bce20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021bce40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:52:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:52:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:52:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-23 14:52:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-23 14:52:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-23 14:52:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://5eb48e07cbdcbcaeff54eaf15a29b5c08c47f5d38e6467ded74db8c4436a2839}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:52:58.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7037" for this suite.
Dec 23 14:53:04.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:53:05.080: INFO: namespace deployment-7037 deletion completed in 6.126226306s

• [SLOW TEST:21.492 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
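The maxUnavailable and maxSurge values of 25% in the deployment dump above are the apps/v1 defaults. A hedged sketch of the deployment this spec creates follows; adoption happens because the selector also matches the pods of the pre-existing "test-rolling-update-controller" ReplicaSet, which the deployment then scales to zero while rolling its own ReplicaSet up.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pct := intstr.FromString("25%")
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// Matching the adopted ReplicaSet's pods is what triggers adoption.
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &pct, // at most 25% of desired pods down during the roll
					MaxSurge:       &pct, // at most 25% extra pods during the roll
				},
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: v1.PodSpec{Containers: []v1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}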
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:53:05.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 23 14:53:05.372: INFO: Waiting up to 5m0s for pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01" in namespace "var-expansion-3289" to be "success or failure"
Dec 23 14:53:05.386: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01": Phase="Pending", Reason="", readiness=false. Elapsed: 13.12099ms
Dec 23 14:53:07.396: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02308988s
Dec 23 14:53:09.406: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03369296s
Dec 23 14:53:11.416: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043044213s
Dec 23 14:53:13.431: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058961636s
Dec 23 14:53:15.448: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075673477s
STEP: Saw pod success
Dec 23 14:53:15.448: INFO: Pod "var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01" satisfied condition "success or failure"
Dec 23 14:53:15.453: INFO: Trying to get logs from node iruya-node pod var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01 container dapi-container: 
STEP: delete the pod
Dec 23 14:53:15.552: INFO: Waiting for pod var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01 to disappear
Dec 23 14:53:15.657: INFO: Pod var-expansion-6c908adb-5e57-4efc-993a-5bb6b0a83d01 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:53:15.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3289" for this suite.
Dec 23 14:53:21.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:53:21.873: INFO: namespace var-expansion-3289 deletion completed in 6.19333051s

• [SLOW TEST:16.793 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
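The substitution under test here is performed by the kubelet, not by a shell: $(VAR) references in command and args are expanded from the container's declared env before the process starts. A minimal sketch, with invented names and message:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env:   []v1.EnvVar{{Name: "MESSAGE", Value: "hello world"}},
				// No shell involved: the kubelet rewrites $(MESSAGE) before exec,
				// so the container prints "message: hello world".
				Command: []string{"echo", "message: $(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}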
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:53:21.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 23 14:53:21.980: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 14:53:21.988: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 14:53:21.989: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 23 14:53:21.999: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 23 14:53:21.999: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 14:53:21.999: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 23 14:53:21.999: INFO: 	Container weave ready: true, restart count 0
Dec 23 14:53:21.999: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 14:53:21.999: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 23 14:53:22.035: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 23 14:53:22.035: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container coredns ready: true, restart count 0
Dec 23 14:53:22.035: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 23 14:53:22.035: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 23 14:53:22.035: INFO: 	Container weave ready: true, restart count 0
Dec 23 14:53:22.035: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 14:53:22.035: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container coredns ready: true, restart count 0
Dec 23 14:53:22.035: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container etcd ready: true, restart count 0
Dec 23 14:53:22.035: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 14:53:22.035: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 23 14:53:22.035: INFO: 	Container kube-controller-manager ready: true, restart count 10
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-65df1133-0914-4d1c-9e85-58e29080b9b0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-65df1133-0914-4d1c-9e85-58e29080b9b0 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-65df1133-0914-4d1c-9e85-58e29080b9b0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:53:40.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1987" for this suite.
Dec 23 14:54:00.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:54:00.381: INFO: namespace sched-pred-1987 deletion completed in 20.160634857s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.507 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
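The flow above is label first, pod second: the suite pins a throwaway pod to find a schedulable node, stamps that node with the random kubernetes.io/e2e-… label seen in the log, then relaunches the pod with a matching nodeSelector. A sketch of the relaunched pod (the image is illustrative; the label pair is the one from this run):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			// Schedulable only onto a node carrying this exact label pair,
			// which the test applied to iruya-node just beforehand.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-65df1133-0914-4d1c-9e85-58e29080b9b0": "42",
			},
			Containers: []v1.Container{{Name: "with-labels", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}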
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:54:00.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 23 14:54:00.529: INFO: Waiting up to 5m0s for pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461" in namespace "emptydir-5639" to be "success or failure"
Dec 23 14:54:00.537: INFO: Pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461": Phase="Pending", Reason="", readiness=false. Elapsed: 7.45505ms
Dec 23 14:54:02.564: INFO: Pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034290348s
Dec 23 14:54:04.581: INFO: Pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051456863s
Dec 23 14:54:06.595: INFO: Pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06554087s
Dec 23 14:54:08.607: INFO: Pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077375653s
STEP: Saw pod success
Dec 23 14:54:08.607: INFO: Pod "pod-0a324c5d-64a9-462d-8e24-4264bb33e461" satisfied condition "success or failure"
Dec 23 14:54:08.614: INFO: Trying to get logs from node iruya-node pod pod-0a324c5d-64a9-462d-8e24-4264bb33e461 container test-container: 
STEP: delete the pod
Dec 23 14:54:08.742: INFO: Waiting for pod pod-0a324c5d-64a9-462d-8e24-4264bb33e461 to disappear
Dec 23 14:54:08.752: INFO: Pod pod-0a324c5d-64a9-462d-8e24-4264bb33e461 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:54:08.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5639" for this suite.
Dec 23 14:54:14.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:54:14.926: INFO: namespace emptydir-5639 deletion completed in 6.162157028s

• [SLOW TEST:14.545 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:54:14.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:54:15.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8764" for this suite.
Dec 23 14:54:37.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:54:37.265: INFO: namespace pods-8764 deletion completed in 22.221893373s

• [SLOW TEST:22.339 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
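QoS class is derived by the apiserver, never declared by the user. A sketch of a pod that would be classified as Guaranteed, since every container sets requests equal to limits for both cpu and memory (names and sizes invented); omitting resources entirely yields BestEffort instead, as the pod dumps elsewhere in this log show (QOSClass:BestEffort).

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	res := v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("100m"),
		v1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/pause:3.1",
				// Requests == Limits for every resource of every container
				// puts the pod in the Guaranteed QoS class.
				Resources: v1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	fmt.Println("expected status.qosClass:", v1.PodQOSGuaranteed)
}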
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:54:37.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-60b096e9-dd0e-4ff2-87c5-1258e9008a47
STEP: Creating a pod to test consume configMaps
Dec 23 14:54:37.451: INFO: Waiting up to 5m0s for pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9" in namespace "configmap-1016" to be "success or failure"
Dec 23 14:54:37.461: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.539475ms
Dec 23 14:54:39.498: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046892407s
Dec 23 14:54:41.537: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086659729s
Dec 23 14:54:43.558: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10756777s
Dec 23 14:54:45.616: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9": Phase="Running", Reason="", readiness=true. Elapsed: 8.165435068s
Dec 23 14:54:47.629: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178419698s
STEP: Saw pod success
Dec 23 14:54:47.630: INFO: Pod "pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9" satisfied condition "success or failure"
Dec 23 14:54:47.638: INFO: Trying to get logs from node iruya-node pod pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9 container configmap-volume-test: 
STEP: delete the pod
Dec 23 14:54:47.701: INFO: Waiting for pod pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9 to disappear
Dec 23 14:54:47.718: INFO: Pod pod-configmaps-128d8de9-73de-47ca-91a7-2c26b8236ab9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:54:47.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1016" for this suite.
Dec 23 14:54:53.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:54:54.004: INFO: namespace configmap-1016 deletion completed in 6.277560189s

• [SLOW TEST:16.738 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
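Two things are under test at once here: key-to-path remapping inside the configMap volume, and readability of the projected file by a non-root user. A sketch with invented names and key; the items list projects only the mapped key, surfacing it at the remapped relative path, and the default 0644 file mode keeps it readable for uid 1000.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// Run the whole pod as a non-root user.
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
			Containers: []v1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []v1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []v1.Volume{{
				Name: "configmap-volume",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map"}, // hypothetical
						// Only the mapped key is projected, under the remapped path.
						Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}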
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:54:54.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-70e10e07-66a8-490a-af5d-01cc314a9377
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:54:54.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5833" for this suite.
Dec 23 14:55:00.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:55:00.246: INFO: namespace configmap-5833 deletion completed in 6.124774601s

• [SLOW TEST:6.240 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
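The failure here is apiserver-side validation, so it is reproducible with any client. A sketch assuming the pre-1.17 client-go signatures this suite uses and an invented ConfigMap name; the Create call returns an Invalid error because "" is not a valid data key.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // hypothetical name
		Data:       map[string]string{"": "value-1"},                   // empty key: rejected by validation
	}
	_, err = cs.CoreV1().ConfigMaps("default").Create(cm)
	fmt.Println("rejected as invalid:", apierrors.IsInvalid(err)) // expect true
}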
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:55:00.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5980, will wait for the garbage collector to delete the pods
Dec 23 14:55:10.463: INFO: Deleting Job.batch foo took: 19.646181ms
Dec 23 14:55:10.764: INFO: Terminating Job.batch foo pods took: 301.02724ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:55:56.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5980" for this suite.
Dec 23 14:56:02.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:56:02.870: INFO: namespace job-5980 deletion completed in 6.263330646s

• [SLOW TEST:62.624 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
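The deletion step above removes the Job and then lets the garbage collector take down its pods. One hedged way to reproduce that shape with the pre-1.17 client-go this suite uses is a delete with a background propagation policy; the job name "foo" and namespace "job-5980" are taken from this run.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the Job object goes first, then the garbage
	// collector deletes the pods it owned.
	prop := metav1.DeletePropagationBackground
	if err := cs.BatchV1().Jobs("job-5980").Delete("foo", &metav1.DeleteOptions{PropagationPolicy: &prop}); err != nil {
		panic(err)
	}
	fmt.Println("delete issued; poll until the job and its pods are gone")
}

The long gap at the "Ensuring job was deleted" step is exactly that poll: the suite waits until neither the Job nor its pods remain.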
SSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:56:02.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-30756fa9-6b95-4ab6-833b-176d9cfd1d4f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-30756fa9-6b95-4ab6-833b-176d9cfd1d4f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:56:13.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6144" for this suite.
Dec 23 14:56:35.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:56:35.503: INFO: namespace configmap-6144 deletion completed in 22.197012311s

• [SLOW TEST:32.632 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
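No pod restart is involved in this spec: the kubelet refreshes configMap volume contents on its periodic sync, which is why the "waiting to observe update in volume" step takes several seconds rather than completing instantly. A sketch of the update half, with the object name and namespace taken from this run, an invented data key, and pre-1.17 client-go signatures assumed:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("configmap-6144").Get(
		"configmap-test-upd-30756fa9-6b95-4ab6-833b-176d9cfd1d4f", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data["data-1"] = "value-2" // hypothetical key; pods see this on the next kubelet sync
	if _, err := cs.CoreV1().ConfigMaps("configmap-6144").Update(cm); err != nil {
		panic(err)
	}
	fmt.Println("configmap updated; mounted files refresh without a pod restart")
}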
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:56:35.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:56:41.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7743" for this suite.
Dec 23 14:56:47.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:56:48.040: INFO: namespace namespaces-7743 deletion completed in 6.131175961s
STEP: Destroying namespace "nsdeletetest-6806" for this suite.
Dec 23 14:56:48.042: INFO: Namespace nsdeletetest-6806 was already deleted
STEP: Destroying namespace "nsdeletetest-4049" for this suite.
Dec 23 14:56:54.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:56:54.170: INFO: namespace nsdeletetest-4049 deletion completed in 6.127833879s

• [SLOW TEST:18.665 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
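The final verification above reduces to listing services in the recreated namespace and expecting zero items. A sketch, reusing a namespace name from this run and assuming the pre-1.17 client-go signatures:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace deletion cascades to every namespaced object, so a freshly
	// recreated namespace of the same name starts out with no services.
	svcs, err := cs.CoreV1().Services("nsdeletetest-6806").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("services found:", len(svcs.Items)) // expect 0
}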
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:56:54.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 23 14:56:54.258: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5232,SelfLink:/api/v1/namespaces/watch-5232/configmaps/e2e-watch-test-label-changed,UID:6b83f337-469d-475b-9dad-62fb85b5cc5f,ResourceVersion:17779875,Generation:0,CreationTimestamp:2019-12-23 14:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 14:56:54.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5232,SelfLink:/api/v1/namespaces/watch-5232/configmaps/e2e-watch-test-label-changed,UID:6b83f337-469d-475b-9dad-62fb85b5cc5f,ResourceVersion:17779876,Generation:0,CreationTimestamp:2019-12-23 14:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 23 14:56:54.260: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5232,SelfLink:/api/v1/namespaces/watch-5232/configmaps/e2e-watch-test-label-changed,UID:6b83f337-469d-475b-9dad-62fb85b5cc5f,ResourceVersion:17779877,Generation:0,CreationTimestamp:2019-12-23 14:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 23 14:57:04.419: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5232,SelfLink:/api/v1/namespaces/watch-5232/configmaps/e2e-watch-test-label-changed,UID:6b83f337-469d-475b-9dad-62fb85b5cc5f,ResourceVersion:17779892,Generation:0,CreationTimestamp:2019-12-23 14:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 14:57:04.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5232,SelfLink:/api/v1/namespaces/watch-5232/configmaps/e2e-watch-test-label-changed,UID:6b83f337-469d-475b-9dad-62fb85b5cc5f,ResourceVersion:17779893,Generation:0,CreationTimestamp:2019-12-23 14:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 23 14:57:04.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5232,SelfLink:/api/v1/namespaces/watch-5232/configmaps/e2e-watch-test-label-changed,UID:6b83f337-469d-475b-9dad-62fb85b5cc5f,ResourceVersion:17779894,Generation:0,CreationTimestamp:2019-12-23 14:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:57:04.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5232" for this suite.
Dec 23 14:57:10.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:57:10.670: INFO: namespace watch-5232 deletion completed in 6.235650184s

• [SLOW TEST:16.500 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
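What the ADDED/MODIFIED/DELETED sequence above demonstrates: a label-selected watch reports events relative to the selected set, not the object's lifecycle, so an object whose label stops matching is delivered as DELETED even though it still exists, and restoring the label delivers it as ADDED again. A minimal sketch of opening such a watch, assuming 1.15-era client-go signatures and reusing the namespace and selector from the log:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	w, err := cs.CoreV1().ConfigMaps("watch-5232").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events are relative to the selected set: removing the label from the
	// configmap yields DELETED here; restoring it yields ADDED again.
	for event := range w.ResultChan() {
		fmt.Println("Got :", event.Type)
	}
}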
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:57:10.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 23 14:57:11.054: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8843,SelfLink:/api/v1/namespaces/watch-8843/configmaps/e2e-watch-test-resource-version,UID:831526e3-abbc-4b68-afad-af4963e2e523,ResourceVersion:17779915,Generation:0,CreationTimestamp:2019-12-23 14:57:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 14:57:11.054: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8843,SelfLink:/api/v1/namespaces/watch-8843/configmaps/e2e-watch-test-resource-version,UID:831526e3-abbc-4b68-afad-af4963e2e523,ResourceVersion:17779916,Generation:0,CreationTimestamp:2019-12-23 14:57:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:57:11.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8843" for this suite.
Dec 23 14:57:17.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:57:17.394: INFO: namespace watch-8843 deletion completed in 6.331941613s

• [SLOW TEST:6.723 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
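This second Watchers spec relies on a watch opened with an explicit ResourceVersion replaying everything that happened after that version, which is why the log above shows exactly the second MODIFIED and the DELETED event. A sketch of the same flow under the same 1.15-era signature assumptions; the "default" namespace and error handling are trimmed to essentials:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(config)
	cms := cs.CoreV1().ConfigMaps("default")

	cm, _ := cms.Create(&corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-watch-test-resource-version"},
	})

	cm.Data = map[string]string{"mutation": "1"}
	cm, _ = cms.Update(cm)
	rv := cm.ResourceVersion // remember the version of the first update

	cm.Data["mutation"] = "2"
	cm, _ = cms.Update(cm)
	_ = cms.Delete(cm.Name, nil)

	// Starting from rv replays only what happened after the first update:
	// the second MODIFIED, then the DELETED.
	w, _ := cms.Watch(metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: rv,
	})
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Println("Got :", event.Type)
		if event.Type == watch.Deleted {
			break
		}
	}
}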
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:57:17.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-53f4ab02-73e8-4e5c-bb32-bb69816978ef
STEP: Creating a pod to test consume configMaps
Dec 23 14:57:17.587: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe" in namespace "projected-6142" to be "success or failure"
Dec 23 14:57:17.595: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151853ms
Dec 23 14:57:19.603: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015771657s
Dec 23 14:57:21.608: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02164677s
Dec 23 14:57:23.624: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036740963s
Dec 23 14:57:25.693: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105879814s
Dec 23 14:57:27.703: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115957692s
STEP: Saw pod success
Dec 23 14:57:27.703: INFO: Pod "pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe" satisfied condition "success or failure"
Dec 23 14:57:27.707: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 14:57:27.848: INFO: Waiting for pod pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe to disappear
Dec 23 14:57:27.869: INFO: Pod pod-projected-configmaps-0add00a1-d056-4edc-a7c8-1940d6dceafe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:57:27.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6142" for this suite.
Dec 23 14:57:33.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:57:34.072: INFO: namespace projected-6142 deletion completed in 6.186236024s

• [SLOW TEST:16.678 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
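For reference, the shape of the pod this spec submits: a projected configMap volume consumed by a container running under a non-root UID. This is a hypothetical reconstruction, not the framework's own builder; the names, UID, and mount path are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootProjectedConfigMapPod() *corev1.Pod {
	uid := int64(1000) // any non-zero UID satisfies "as non-root"
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			// The pod runs to completion, ending in Succeeded as logged.
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}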
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:57:34.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 23 14:57:34.174: INFO: Waiting up to 5m0s for pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2" in namespace "emptydir-5981" to be "success or failure"
Dec 23 14:57:34.181: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.058491ms
Dec 23 14:57:36.191: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016468396s
Dec 23 14:57:38.197: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022691085s
Dec 23 14:57:40.205: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030445553s
Dec 23 14:57:42.222: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047923309s
Dec 23 14:57:44.235: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061148103s
STEP: Saw pod success
Dec 23 14:57:44.236: INFO: Pod "pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2" satisfied condition "success or failure"
Dec 23 14:57:44.240: INFO: Trying to get logs from node iruya-node pod pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2 container test-container: 
STEP: delete the pod
Dec 23 14:57:44.434: INFO: Waiting for pod pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2 to disappear
Dec 23 14:57:44.447: INFO: Pod pod-84d4d22b-1091-4cea-8bba-a9e14e2999f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:57:44.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5981" for this suite.
Dec 23 14:57:50.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:57:50.660: INFO: namespace emptydir-5981 deletion completed in 6.20014024s

• [SLOW TEST:16.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
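The (root,0644,default) triple in the spec name encodes the user, the file mode, and the emptyDir medium under test. A hypothetical sketch of a pod exercising the same combination; the shell command is illustrative, the real test image performs the mode verification itself:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirDefaultMediumPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// "default" medium: backed by node disk, not tmpfs.
						Medium: corev1.StorageMediumDefault,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Running as root (the "root" in the spec name), write a
				// file with mode 0644 and print it back for verification.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}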
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:57:50.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-4cc5078b-2c38-4067-9c2b-58a7c602c154
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:57:50.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1323" for this suite.
Dec 23 14:57:56.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:57:56.968: INFO: namespace secrets-1323 deletion completed in 6.190713202s

• [SLOW TEST:6.308 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
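This spec passes without ever running a pod: the API server's validation rejects a Secret whose data map contains an empty key, so the Create call fails immediately, and the log above accordingly shows no pod phases. A minimal sketch, again with 1.15-era signatures and an illustrative namespace and name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(config)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data: map[string][]byte{
			"": []byte("value-1"), // empty key: fails server-side validation
		},
	}
	_, err := cs.CoreV1().Secrets("default").Create(secret)
	if err == nil {
		panic("expected the API server to reject the empty key")
	}
	fmt.Println("create rejected as expected:", err)
}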
SSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:57:56.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7777
I1223 14:57:57.041512       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7777, replica count: 1
I1223 14:57:58.093951       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:57:59.094771       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:58:00.095680       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:58:01.096887       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:58:02.097655       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:58:03.098683       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:58:04.099632       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 14:58:05.100360       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 23 14:58:05.245: INFO: Created: latency-svc-rnzjc
Dec 23 14:58:05.271: INFO: Got endpoints: latency-svc-rnzjc [69.699473ms]
Dec 23 14:58:05.435: INFO: Created: latency-svc-qltt4
Dec 23 14:58:05.444: INFO: Got endpoints: latency-svc-qltt4 [171.4574ms]
Dec 23 14:58:05.492: INFO: Created: latency-svc-sxmtb
Dec 23 14:58:05.493: INFO: Got endpoints: latency-svc-sxmtb [220.287761ms]
Dec 23 14:58:05.595: INFO: Created: latency-svc-phz65
Dec 23 14:58:05.607: INFO: Got endpoints: latency-svc-phz65 [334.040083ms]
Dec 23 14:58:05.661: INFO: Created: latency-svc-4znr8
Dec 23 14:58:05.682: INFO: Got endpoints: latency-svc-4znr8 [408.996252ms]
Dec 23 14:58:05.804: INFO: Created: latency-svc-2jr5k
Dec 23 14:58:05.809: INFO: Got endpoints: latency-svc-2jr5k [536.673438ms]
Dec 23 14:58:05.859: INFO: Created: latency-svc-8xwh5
Dec 23 14:58:05.997: INFO: Got endpoints: latency-svc-8xwh5 [723.765202ms]
Dec 23 14:58:05.999: INFO: Created: latency-svc-blk8h
Dec 23 14:58:06.056: INFO: Got endpoints: latency-svc-blk8h [783.372419ms]
Dec 23 14:58:06.075: INFO: Created: latency-svc-smfr5
Dec 23 14:58:06.079: INFO: Got endpoints: latency-svc-smfr5 [805.686392ms]
Dec 23 14:58:06.259: INFO: Created: latency-svc-br56s
Dec 23 14:58:06.278: INFO: Got endpoints: latency-svc-br56s [1.004555895s]
Dec 23 14:58:06.455: INFO: Created: latency-svc-rctjv
Dec 23 14:58:06.487: INFO: Got endpoints: latency-svc-rctjv [407.886517ms]
Dec 23 14:58:06.522: INFO: Created: latency-svc-ktjbr
Dec 23 14:58:06.553: INFO: Got endpoints: latency-svc-ktjbr [1.279071617s]
Dec 23 14:58:06.648: INFO: Created: latency-svc-s2mjj
Dec 23 14:58:06.687: INFO: Created: latency-svc-n26hf
Dec 23 14:58:06.696: INFO: Got endpoints: latency-svc-s2mjj [1.422302865s]
Dec 23 14:58:06.718: INFO: Got endpoints: latency-svc-n26hf [1.444251796s]
Dec 23 14:58:06.820: INFO: Created: latency-svc-z2qzh
Dec 23 14:58:06.845: INFO: Got endpoints: latency-svc-z2qzh [1.571447939s]
Dec 23 14:58:06.896: INFO: Created: latency-svc-r7s57
Dec 23 14:58:07.021: INFO: Got endpoints: latency-svc-r7s57 [1.74753775s]
Dec 23 14:58:07.029: INFO: Created: latency-svc-m5jkw
Dec 23 14:58:07.036: INFO: Got endpoints: latency-svc-m5jkw [1.762742645s]
Dec 23 14:58:07.079: INFO: Created: latency-svc-kj5kd
Dec 23 14:58:07.080: INFO: Got endpoints: latency-svc-kj5kd [1.63646785s]
Dec 23 14:58:07.311: INFO: Created: latency-svc-nz6cn
Dec 23 14:58:07.342: INFO: Got endpoints: latency-svc-nz6cn [1.848761255s]
Dec 23 14:58:07.406: INFO: Created: latency-svc-7hb9q
Dec 23 14:58:07.571: INFO: Got endpoints: latency-svc-7hb9q [1.963772768s]
Dec 23 14:58:07.602: INFO: Created: latency-svc-x77gp
Dec 23 14:58:07.608: INFO: Got endpoints: latency-svc-x77gp [1.925715046s]
Dec 23 14:58:07.670: INFO: Created: latency-svc-v98hg
Dec 23 14:58:07.867: INFO: Got endpoints: latency-svc-v98hg [2.058266961s]
Dec 23 14:58:08.182: INFO: Created: latency-svc-hdds8
Dec 23 14:58:08.187: INFO: Got endpoints: latency-svc-hdds8 [2.189538919s]
Dec 23 14:58:08.249: INFO: Created: latency-svc-t99dq
Dec 23 14:58:08.486: INFO: Got endpoints: latency-svc-t99dq [2.428646478s]
Dec 23 14:58:08.519: INFO: Created: latency-svc-lx6cw
Dec 23 14:58:08.554: INFO: Got endpoints: latency-svc-lx6cw [2.275424821s]
Dec 23 14:58:08.727: INFO: Created: latency-svc-945l5
Dec 23 14:58:08.789: INFO: Got endpoints: latency-svc-945l5 [2.301065577s]
Dec 23 14:58:08.792: INFO: Created: latency-svc-5slwh
Dec 23 14:58:08.818: INFO: Got endpoints: latency-svc-5slwh [2.265065401s]
Dec 23 14:58:08.974: INFO: Created: latency-svc-bmhxz
Dec 23 14:58:08.994: INFO: Got endpoints: latency-svc-bmhxz [2.298003402s]
Dec 23 14:58:09.051: INFO: Created: latency-svc-b6z4n
Dec 23 14:58:09.063: INFO: Got endpoints: latency-svc-b6z4n [2.345183735s]
Dec 23 14:58:09.178: INFO: Created: latency-svc-2sn4q
Dec 23 14:58:09.188: INFO: Got endpoints: latency-svc-2sn4q [2.342309221s]
Dec 23 14:58:09.223: INFO: Created: latency-svc-h4q6p
Dec 23 14:58:09.229: INFO: Got endpoints: latency-svc-h4q6p [2.20760658s]
Dec 23 14:58:09.269: INFO: Created: latency-svc-5fzbl
Dec 23 14:58:09.379: INFO: Got endpoints: latency-svc-5fzbl [2.342754405s]
Dec 23 14:58:09.394: INFO: Created: latency-svc-67js6
Dec 23 14:58:09.413: INFO: Got endpoints: latency-svc-67js6 [2.332927251s]
Dec 23 14:58:09.440: INFO: Created: latency-svc-jn2kd
Dec 23 14:58:09.458: INFO: Got endpoints: latency-svc-jn2kd [2.115587777s]
Dec 23 14:58:09.552: INFO: Created: latency-svc-tk2g2
Dec 23 14:58:09.562: INFO: Got endpoints: latency-svc-tk2g2 [1.990914499s]
Dec 23 14:58:09.624: INFO: Created: latency-svc-kb87m
Dec 23 14:58:09.634: INFO: Got endpoints: latency-svc-kb87m [2.026452681s]
Dec 23 14:58:09.707: INFO: Created: latency-svc-zjd4w
Dec 23 14:58:09.729: INFO: Got endpoints: latency-svc-zjd4w [1.860952092s]
Dec 23 14:58:09.803: INFO: Created: latency-svc-9mjcj
Dec 23 14:58:09.877: INFO: Got endpoints: latency-svc-9mjcj [1.688856733s]
Dec 23 14:58:09.920: INFO: Created: latency-svc-klhrj
Dec 23 14:58:09.943: INFO: Got endpoints: latency-svc-klhrj [1.457131378s]
Dec 23 14:58:10.055: INFO: Created: latency-svc-nj8pk
Dec 23 14:58:10.057: INFO: Got endpoints: latency-svc-nj8pk [1.502906294s]
Dec 23 14:58:10.100: INFO: Created: latency-svc-2pvgk
Dec 23 14:58:10.105: INFO: Got endpoints: latency-svc-2pvgk [1.314875291s]
Dec 23 14:58:10.136: INFO: Created: latency-svc-4pfkv
Dec 23 14:58:10.228: INFO: Got endpoints: latency-svc-4pfkv [1.408623982s]
Dec 23 14:58:10.240: INFO: Created: latency-svc-wt9cg
Dec 23 14:58:10.255: INFO: Got endpoints: latency-svc-wt9cg [1.260301311s]
Dec 23 14:58:10.295: INFO: Created: latency-svc-57zhr
Dec 23 14:58:10.306: INFO: Got endpoints: latency-svc-57zhr [1.241701524s]
Dec 23 14:58:10.400: INFO: Created: latency-svc-9mpz6
Dec 23 14:58:10.448: INFO: Got endpoints: latency-svc-9mpz6 [1.260112611s]
Dec 23 14:58:10.451: INFO: Created: latency-svc-t4vrz
Dec 23 14:58:10.467: INFO: Got endpoints: latency-svc-t4vrz [1.238461845s]
Dec 23 14:58:10.573: INFO: Created: latency-svc-j9lfn
Dec 23 14:58:10.612: INFO: Got endpoints: latency-svc-j9lfn [1.231849722s]
Dec 23 14:58:10.620: INFO: Created: latency-svc-cqcss
Dec 23 14:58:10.646: INFO: Got endpoints: latency-svc-cqcss [1.23158368s]
Dec 23 14:58:10.744: INFO: Created: latency-svc-bt79p
Dec 23 14:58:10.754: INFO: Got endpoints: latency-svc-bt79p [1.296613506s]
Dec 23 14:58:10.788: INFO: Created: latency-svc-59rvd
Dec 23 14:58:10.793: INFO: Got endpoints: latency-svc-59rvd [1.230633991s]
Dec 23 14:58:10.971: INFO: Created: latency-svc-cnt9r
Dec 23 14:58:11.000: INFO: Got endpoints: latency-svc-cnt9r [1.365261208s]
Dec 23 14:58:11.032: INFO: Created: latency-svc-m4vnb
Dec 23 14:58:11.179: INFO: Got endpoints: latency-svc-m4vnb [1.448704032s]
Dec 23 14:58:11.189: INFO: Created: latency-svc-4k457
Dec 23 14:58:11.192: INFO: Got endpoints: latency-svc-4k457 [1.314879501s]
Dec 23 14:58:11.248: INFO: Created: latency-svc-7dx2s
Dec 23 14:58:11.336: INFO: Created: latency-svc-trlzc
Dec 23 14:58:11.338: INFO: Got endpoints: latency-svc-7dx2s [1.394193178s]
Dec 23 14:58:11.351: INFO: Got endpoints: latency-svc-trlzc [1.293656831s]
Dec 23 14:58:11.401: INFO: Created: latency-svc-wgpqs
Dec 23 14:58:11.410: INFO: Got endpoints: latency-svc-wgpqs [1.304994423s]
Dec 23 14:58:11.545: INFO: Created: latency-svc-krqpv
Dec 23 14:58:11.551: INFO: Got endpoints: latency-svc-krqpv [1.322669021s]
Dec 23 14:58:11.597: INFO: Created: latency-svc-xj7qw
Dec 23 14:58:11.603: INFO: Got endpoints: latency-svc-xj7qw [1.347886409s]
Dec 23 14:58:11.698: INFO: Created: latency-svc-ftd8v
Dec 23 14:58:11.701: INFO: Got endpoints: latency-svc-ftd8v [1.395327849s]
Dec 23 14:58:11.744: INFO: Created: latency-svc-ht4ql
Dec 23 14:58:11.774: INFO: Got endpoints: latency-svc-ht4ql [1.325650668s]
Dec 23 14:58:11.802: INFO: Created: latency-svc-mpktq
Dec 23 14:58:11.891: INFO: Created: latency-svc-j7xgd
Dec 23 14:58:11.891: INFO: Got endpoints: latency-svc-mpktq [1.423202885s]
Dec 23 14:58:11.930: INFO: Got endpoints: latency-svc-j7xgd [1.317566762s]
Dec 23 14:58:11.977: INFO: Created: latency-svc-2szqr
Dec 23 14:58:12.063: INFO: Got endpoints: latency-svc-2szqr [1.417115718s]
Dec 23 14:58:12.074: INFO: Created: latency-svc-cw5jb
Dec 23 14:58:12.089: INFO: Got endpoints: latency-svc-cw5jb [1.334439456s]
Dec 23 14:58:12.145: INFO: Created: latency-svc-hqdp4
Dec 23 14:58:12.249: INFO: Got endpoints: latency-svc-hqdp4 [1.455847953s]
Dec 23 14:58:12.267: INFO: Created: latency-svc-vshkw
Dec 23 14:58:12.283: INFO: Got endpoints: latency-svc-vshkw [1.282029441s]
Dec 23 14:58:12.407: INFO: Created: latency-svc-rqjg6
Dec 23 14:58:12.431: INFO: Got endpoints: latency-svc-rqjg6 [1.251619196s]
Dec 23 14:58:12.592: INFO: Created: latency-svc-tgh69
Dec 23 14:58:12.638: INFO: Got endpoints: latency-svc-tgh69 [1.44544338s]
Dec 23 14:58:12.654: INFO: Created: latency-svc-ttrr5
Dec 23 14:58:12.654: INFO: Got endpoints: latency-svc-ttrr5 [1.315262688s]
Dec 23 14:58:12.684: INFO: Created: latency-svc-4825p
Dec 23 14:58:12.760: INFO: Got endpoints: latency-svc-4825p [1.408181761s]
Dec 23 14:58:12.781: INFO: Created: latency-svc-gq58t
Dec 23 14:58:12.789: INFO: Got endpoints: latency-svc-gq58t [1.378592436s]
Dec 23 14:58:12.826: INFO: Created: latency-svc-v9ktj
Dec 23 14:58:12.841: INFO: Got endpoints: latency-svc-v9ktj [1.28901119s]
Dec 23 14:58:12.957: INFO: Created: latency-svc-zm9zm
Dec 23 14:58:12.964: INFO: Got endpoints: latency-svc-zm9zm [1.360485355s]
Dec 23 14:58:13.016: INFO: Created: latency-svc-2d445
Dec 23 14:58:13.035: INFO: Got endpoints: latency-svc-2d445 [1.333922694s]
Dec 23 14:58:13.041: INFO: Created: latency-svc-5wlxm
Dec 23 14:58:13.619: INFO: Got endpoints: latency-svc-5wlxm [1.844795634s]
Dec 23 14:58:13.636: INFO: Created: latency-svc-m64wv
Dec 23 14:58:13.660: INFO: Got endpoints: latency-svc-m64wv [1.768402794s]
Dec 23 14:58:13.720: INFO: Created: latency-svc-49vtc
Dec 23 14:58:13.784: INFO: Got endpoints: latency-svc-49vtc [1.853397812s]
Dec 23 14:58:13.820: INFO: Created: latency-svc-rqrsn
Dec 23 14:58:13.831: INFO: Got endpoints: latency-svc-rqrsn [1.767188472s]
Dec 23 14:58:13.900: INFO: Created: latency-svc-nnmtv
Dec 23 14:58:14.035: INFO: Got endpoints: latency-svc-nnmtv [1.945150603s]
Dec 23 14:58:14.058: INFO: Created: latency-svc-7g8px
Dec 23 14:58:14.065: INFO: Got endpoints: latency-svc-7g8px [1.815327642s]
Dec 23 14:58:14.121: INFO: Created: latency-svc-df4g6
Dec 23 14:58:14.248: INFO: Created: latency-svc-62qhn
Dec 23 14:58:14.255: INFO: Got endpoints: latency-svc-df4g6 [1.972224329s]
Dec 23 14:58:14.260: INFO: Got endpoints: latency-svc-62qhn [1.827757754s]
Dec 23 14:58:14.392: INFO: Created: latency-svc-c2kbn
Dec 23 14:58:14.439: INFO: Got endpoints: latency-svc-c2kbn [1.800035315s]
Dec 23 14:58:14.459: INFO: Created: latency-svc-rjsz4
Dec 23 14:58:14.570: INFO: Got endpoints: latency-svc-rjsz4 [1.915534154s]
Dec 23 14:58:14.627: INFO: Created: latency-svc-zvbcr
Dec 23 14:58:14.629: INFO: Got endpoints: latency-svc-zvbcr [1.868483475s]
Dec 23 14:58:14.751: INFO: Created: latency-svc-9hbxl
Dec 23 14:58:14.755: INFO: Got endpoints: latency-svc-9hbxl [1.96578396s]
Dec 23 14:58:14.803: INFO: Created: latency-svc-fhsm4
Dec 23 14:58:14.925: INFO: Got endpoints: latency-svc-fhsm4 [2.084079447s]
Dec 23 14:58:14.931: INFO: Created: latency-svc-84b75
Dec 23 14:58:14.970: INFO: Created: latency-svc-gsv6s
Dec 23 14:58:14.973: INFO: Got endpoints: latency-svc-84b75 [2.008832961s]
Dec 23 14:58:14.979: INFO: Got endpoints: latency-svc-gsv6s [1.943464778s]
Dec 23 14:58:15.028: INFO: Created: latency-svc-9884f
Dec 23 14:58:15.134: INFO: Got endpoints: latency-svc-9884f [1.514011355s]
Dec 23 14:58:15.140: INFO: Created: latency-svc-k9nw6
Dec 23 14:58:15.196: INFO: Got endpoints: latency-svc-k9nw6 [1.535405478s]
Dec 23 14:58:15.200: INFO: Created: latency-svc-8wxd8
Dec 23 14:58:15.219: INFO: Got endpoints: latency-svc-8wxd8 [1.434364872s]
Dec 23 14:58:15.346: INFO: Created: latency-svc-c26tr
Dec 23 14:58:15.371: INFO: Got endpoints: latency-svc-c26tr [1.538884407s]
Dec 23 14:58:15.418: INFO: Created: latency-svc-v96hm
Dec 23 14:58:15.588: INFO: Created: latency-svc-mmqqm
Dec 23 14:58:15.588: INFO: Got endpoints: latency-svc-v96hm [1.552861181s]
Dec 23 14:58:15.595: INFO: Got endpoints: latency-svc-mmqqm [1.530181333s]
Dec 23 14:58:15.635: INFO: Created: latency-svc-l6lcn
Dec 23 14:58:15.647: INFO: Got endpoints: latency-svc-l6lcn [1.390918333s]
Dec 23 14:58:15.786: INFO: Created: latency-svc-d598z
Dec 23 14:58:15.790: INFO: Got endpoints: latency-svc-d598z [1.530121219s]
Dec 23 14:58:15.855: INFO: Created: latency-svc-lgw4l
Dec 23 14:58:15.857: INFO: Got endpoints: latency-svc-lgw4l [1.416655243s]
Dec 23 14:58:16.005: INFO: Created: latency-svc-zl4kj
Dec 23 14:58:16.014: INFO: Got endpoints: latency-svc-zl4kj [1.443703762s]
Dec 23 14:58:16.172: INFO: Created: latency-svc-z4jfs
Dec 23 14:58:16.213: INFO: Got endpoints: latency-svc-z4jfs [1.584299232s]
Dec 23 14:58:16.218: INFO: Created: latency-svc-8sqjw
Dec 23 14:58:16.218: INFO: Got endpoints: latency-svc-8sqjw [1.462796431s]
Dec 23 14:58:16.261: INFO: Created: latency-svc-djgqw
Dec 23 14:58:16.266: INFO: Got endpoints: latency-svc-djgqw [1.340315111s]
Dec 23 14:58:16.460: INFO: Created: latency-svc-vpdnv
Dec 23 14:58:16.486: INFO: Got endpoints: latency-svc-vpdnv [1.513012897s]
Dec 23 14:58:16.531: INFO: Created: latency-svc-w2rz9
Dec 23 14:58:16.679: INFO: Got endpoints: latency-svc-w2rz9 [1.699100205s]
Dec 23 14:58:16.683: INFO: Created: latency-svc-k2bv7
Dec 23 14:58:16.695: INFO: Got endpoints: latency-svc-k2bv7 [1.560676963s]
Dec 23 14:58:16.754: INFO: Created: latency-svc-zmnp9
Dec 23 14:58:16.768: INFO: Got endpoints: latency-svc-zmnp9 [1.571223211s]
Dec 23 14:58:16.875: INFO: Created: latency-svc-nkchq
Dec 23 14:58:16.883: INFO: Got endpoints: latency-svc-nkchq [1.663259555s]
Dec 23 14:58:16.927: INFO: Created: latency-svc-clpbg
Dec 23 14:58:16.938: INFO: Got endpoints: latency-svc-clpbg [1.567095041s]
Dec 23 14:58:17.080: INFO: Created: latency-svc-9vbn6
Dec 23 14:58:17.083: INFO: Got endpoints: latency-svc-9vbn6 [1.495116048s]
Dec 23 14:58:17.133: INFO: Created: latency-svc-4cl69
Dec 23 14:58:17.136: INFO: Got endpoints: latency-svc-4cl69 [1.540766739s]
Dec 23 14:58:17.252: INFO: Created: latency-svc-htwsb
Dec 23 14:58:17.260: INFO: Got endpoints: latency-svc-htwsb [1.612391491s]
Dec 23 14:58:17.313: INFO: Created: latency-svc-s5rst
Dec 23 14:58:17.319: INFO: Got endpoints: latency-svc-s5rst [1.528669865s]
Dec 23 14:58:17.523: INFO: Created: latency-svc-77vhh
Dec 23 14:58:17.555: INFO: Got endpoints: latency-svc-77vhh [1.697944403s]
Dec 23 14:58:17.564: INFO: Created: latency-svc-dv7ql
Dec 23 14:58:17.570: INFO: Got endpoints: latency-svc-dv7ql [1.556422084s]
Dec 23 14:58:17.602: INFO: Created: latency-svc-6ftfm
Dec 23 14:58:17.722: INFO: Got endpoints: latency-svc-6ftfm [1.508360265s]
Dec 23 14:58:17.735: INFO: Created: latency-svc-rlp5p
Dec 23 14:58:17.744: INFO: Got endpoints: latency-svc-rlp5p [1.525542239s]
Dec 23 14:58:17.770: INFO: Created: latency-svc-pffkx
Dec 23 14:58:17.780: INFO: Got endpoints: latency-svc-pffkx [1.513958887s]
Dec 23 14:58:17.818: INFO: Created: latency-svc-74fk5
Dec 23 14:58:17.830: INFO: Got endpoints: latency-svc-74fk5 [1.343328026s]
Dec 23 14:58:18.002: INFO: Created: latency-svc-f6k2f
Dec 23 14:58:18.067: INFO: Got endpoints: latency-svc-f6k2f [1.387852106s]
Dec 23 14:58:18.071: INFO: Created: latency-svc-75wp7
Dec 23 14:58:18.143: INFO: Got endpoints: latency-svc-75wp7 [1.447219502s]
Dec 23 14:58:18.163: INFO: Created: latency-svc-g77x4
Dec 23 14:58:18.408: INFO: Got endpoints: latency-svc-g77x4 [1.640252625s]
Dec 23 14:58:18.499: INFO: Created: latency-svc-4r8d4
Dec 23 14:58:18.597: INFO: Got endpoints: latency-svc-4r8d4 [1.713770314s]
Dec 23 14:58:18.613: INFO: Created: latency-svc-55h8h
Dec 23 14:58:18.621: INFO: Got endpoints: latency-svc-55h8h [1.682394598s]
Dec 23 14:58:18.681: INFO: Created: latency-svc-hcrxn
Dec 23 14:58:18.788: INFO: Got endpoints: latency-svc-hcrxn [1.704153473s]
Dec 23 14:58:18.813: INFO: Created: latency-svc-f7sc5
Dec 23 14:58:18.815: INFO: Got endpoints: latency-svc-f7sc5 [1.678985825s]
Dec 23 14:58:18.873: INFO: Created: latency-svc-njqwk
Dec 23 14:58:19.012: INFO: Got endpoints: latency-svc-njqwk [1.751683821s]
Dec 23 14:58:19.101: INFO: Created: latency-svc-ssjdj
Dec 23 14:58:19.101: INFO: Created: latency-svc-d6drs
Dec 23 14:58:19.196: INFO: Got endpoints: latency-svc-ssjdj [1.640569895s]
Dec 23 14:58:19.198: INFO: Created: latency-svc-d5skx
Dec 23 14:58:19.198: INFO: Got endpoints: latency-svc-d6drs [1.879041279s]
Dec 23 14:58:19.208: INFO: Got endpoints: latency-svc-d5skx [1.637003876s]
Dec 23 14:58:19.257: INFO: Created: latency-svc-mkzf8
Dec 23 14:58:19.387: INFO: Got endpoints: latency-svc-mkzf8 [1.665038893s]
Dec 23 14:58:19.397: INFO: Created: latency-svc-ffj69
Dec 23 14:58:19.399: INFO: Got endpoints: latency-svc-ffj69 [1.655182752s]
Dec 23 14:58:19.467: INFO: Created: latency-svc-dlt4p
Dec 23 14:58:19.467: INFO: Got endpoints: latency-svc-dlt4p [1.687052394s]
Dec 23 14:58:19.606: INFO: Created: latency-svc-gr9v6
Dec 23 14:58:19.612: INFO: Got endpoints: latency-svc-gr9v6 [1.78163736s]
Dec 23 14:58:19.655: INFO: Created: latency-svc-6sr9b
Dec 23 14:58:19.680: INFO: Got endpoints: latency-svc-6sr9b [1.612318975s]
Dec 23 14:58:19.691: INFO: Created: latency-svc-rdwjn
Dec 23 14:58:19.792: INFO: Got endpoints: latency-svc-rdwjn [1.648787364s]
Dec 23 14:58:19.803: INFO: Created: latency-svc-dpcb5
Dec 23 14:58:19.804: INFO: Got endpoints: latency-svc-dpcb5 [1.395242198s]
Dec 23 14:58:19.838: INFO: Created: latency-svc-j6cbw
Dec 23 14:58:19.853: INFO: Got endpoints: latency-svc-j6cbw [1.25508255s]
Dec 23 14:58:19.895: INFO: Created: latency-svc-w6p5h
Dec 23 14:58:20.019: INFO: Got endpoints: latency-svc-w6p5h [1.398022863s]
Dec 23 14:58:20.043: INFO: Created: latency-svc-6vvmk
Dec 23 14:58:20.055: INFO: Got endpoints: latency-svc-6vvmk [1.266139792s]
Dec 23 14:58:20.111: INFO: Created: latency-svc-x4lzw
Dec 23 14:58:20.117: INFO: Got endpoints: latency-svc-x4lzw [1.301349289s]
Dec 23 14:58:20.238: INFO: Created: latency-svc-jsnmm
Dec 23 14:58:20.239: INFO: Got endpoints: latency-svc-jsnmm [1.226744165s]
Dec 23 14:58:20.276: INFO: Created: latency-svc-lsbjf
Dec 23 14:58:20.295: INFO: Got endpoints: latency-svc-lsbjf [1.09632913s]
Dec 23 14:58:20.325: INFO: Created: latency-svc-fhbkv
Dec 23 14:58:20.490: INFO: Got endpoints: latency-svc-fhbkv [1.293456711s]
Dec 23 14:58:20.511: INFO: Created: latency-svc-h6f2p
Dec 23 14:58:20.536: INFO: Got endpoints: latency-svc-h6f2p [1.327645087s]
Dec 23 14:58:20.598: INFO: Created: latency-svc-fdd8r
Dec 23 14:58:20.704: INFO: Got endpoints: latency-svc-fdd8r [1.316629513s]
Dec 23 14:58:20.752: INFO: Created: latency-svc-lbcg8
Dec 23 14:58:20.760: INFO: Got endpoints: latency-svc-lbcg8 [1.36069856s]
Dec 23 14:58:20.788: INFO: Created: latency-svc-fcsrz
Dec 23 14:58:20.789: INFO: Got endpoints: latency-svc-fcsrz [1.321787435s]
Dec 23 14:58:20.908: INFO: Created: latency-svc-tsjlk
Dec 23 14:58:20.931: INFO: Got endpoints: latency-svc-tsjlk [1.318875643s]
Dec 23 14:58:21.005: INFO: Created: latency-svc-8d7z8
Dec 23 14:58:21.132: INFO: Got endpoints: latency-svc-8d7z8 [1.45197427s]
Dec 23 14:58:21.137: INFO: Created: latency-svc-hdn7w
Dec 23 14:58:21.177: INFO: Got endpoints: latency-svc-hdn7w [1.384300295s]
Dec 23 14:58:21.313: INFO: Created: latency-svc-vqhzb
Dec 23 14:58:21.323: INFO: Got endpoints: latency-svc-vqhzb [1.518601445s]
Dec 23 14:58:21.366: INFO: Created: latency-svc-lflwb
Dec 23 14:58:21.371: INFO: Got endpoints: latency-svc-lflwb [1.517105293s]
Dec 23 14:58:21.512: INFO: Created: latency-svc-2q8th
Dec 23 14:58:21.523: INFO: Got endpoints: latency-svc-2q8th [1.50353055s]
Dec 23 14:58:21.583: INFO: Created: latency-svc-jnfmf
Dec 23 14:58:21.596: INFO: Got endpoints: latency-svc-jnfmf [1.541129962s]
Dec 23 14:58:21.707: INFO: Created: latency-svc-gps4n
Dec 23 14:58:21.723: INFO: Got endpoints: latency-svc-gps4n [1.605575305s]
Dec 23 14:58:21.763: INFO: Created: latency-svc-xpskc
Dec 23 14:58:21.774: INFO: Got endpoints: latency-svc-xpskc [1.534670937s]
Dec 23 14:58:21.871: INFO: Created: latency-svc-hmpxd
Dec 23 14:58:21.884: INFO: Got endpoints: latency-svc-hmpxd [1.588186829s]
Dec 23 14:58:21.937: INFO: Created: latency-svc-x99mt
Dec 23 14:58:21.943: INFO: Got endpoints: latency-svc-x99mt [1.452495837s]
Dec 23 14:58:22.076: INFO: Created: latency-svc-79mv8
Dec 23 14:58:22.107: INFO: Got endpoints: latency-svc-79mv8 [1.5706569s]
Dec 23 14:58:22.281: INFO: Created: latency-svc-brln5
Dec 23 14:58:22.298: INFO: Got endpoints: latency-svc-brln5 [1.593018485s]
Dec 23 14:58:22.348: INFO: Created: latency-svc-qqdgp
Dec 23 14:58:22.377: INFO: Got endpoints: latency-svc-qqdgp [1.616755997s]
Dec 23 14:58:22.525: INFO: Created: latency-svc-hch68
Dec 23 14:58:22.597: INFO: Got endpoints: latency-svc-hch68 [1.807529559s]
Dec 23 14:58:22.629: INFO: Created: latency-svc-k9dxq
Dec 23 14:58:22.707: INFO: Got endpoints: latency-svc-k9dxq [1.7756408s]
Dec 23 14:58:22.739: INFO: Created: latency-svc-jr2tf
Dec 23 14:58:22.753: INFO: Got endpoints: latency-svc-jr2tf [1.620173388s]
Dec 23 14:58:22.880: INFO: Created: latency-svc-q987c
Dec 23 14:58:22.927: INFO: Got endpoints: latency-svc-q987c [1.749597901s]
Dec 23 14:58:22.930: INFO: Created: latency-svc-tc5b7
Dec 23 14:58:22.940: INFO: Got endpoints: latency-svc-tc5b7 [1.61673596s]
Dec 23 14:58:23.058: INFO: Created: latency-svc-mqf84
Dec 23 14:58:23.069: INFO: Got endpoints: latency-svc-mqf84 [1.698483066s]
Dec 23 14:58:23.120: INFO: Created: latency-svc-75jpv
Dec 23 14:58:23.134: INFO: Got endpoints: latency-svc-75jpv [1.610130725s]
Dec 23 14:58:23.761: INFO: Created: latency-svc-qrz6k
Dec 23 14:58:23.773: INFO: Got endpoints: latency-svc-qrz6k [2.176169025s]
Dec 23 14:58:23.913: INFO: Created: latency-svc-r8zth
Dec 23 14:58:23.924: INFO: Got endpoints: latency-svc-r8zth [2.200671441s]
Dec 23 14:58:23.978: INFO: Created: latency-svc-cfwdh
Dec 23 14:58:24.095: INFO: Created: latency-svc-hwd5x
Dec 23 14:58:24.096: INFO: Got endpoints: latency-svc-cfwdh [2.321619594s]
Dec 23 14:58:24.155: INFO: Got endpoints: latency-svc-hwd5x [2.270191709s]
Dec 23 14:58:24.162: INFO: Created: latency-svc-r7qgc
Dec 23 14:58:24.300: INFO: Got endpoints: latency-svc-r7qgc [2.356428736s]
Dec 23 14:58:24.350: INFO: Created: latency-svc-x9cg4
Dec 23 14:58:24.510: INFO: Created: latency-svc-p7xh7
Dec 23 14:58:24.510: INFO: Got endpoints: latency-svc-p7xh7 [2.211878425s]
Dec 23 14:58:24.510: INFO: Got endpoints: latency-svc-x9cg4 [2.402872079s]
Dec 23 14:58:24.573: INFO: Created: latency-svc-62l5k
Dec 23 14:58:24.700: INFO: Got endpoints: latency-svc-62l5k [2.322199267s]
Dec 23 14:58:24.724: INFO: Created: latency-svc-6gkcl
Dec 23 14:58:24.728: INFO: Got endpoints: latency-svc-6gkcl [2.130191564s]
Dec 23 14:58:24.809: INFO: Created: latency-svc-kbgmk
Dec 23 14:58:24.933: INFO: Got endpoints: latency-svc-kbgmk [2.224980041s]
Dec 23 14:58:24.961: INFO: Created: latency-svc-lpq46
Dec 23 14:58:24.981: INFO: Got endpoints: latency-svc-lpq46 [2.227628422s]
Dec 23 14:58:25.026: INFO: Created: latency-svc-cqvv7
Dec 23 14:58:25.137: INFO: Got endpoints: latency-svc-cqvv7 [2.208179321s]
Dec 23 14:58:25.146: INFO: Created: latency-svc-4pz46
Dec 23 14:58:25.149: INFO: Got endpoints: latency-svc-4pz46 [2.209057114s]
Dec 23 14:58:25.192: INFO: Created: latency-svc-crcqb
Dec 23 14:58:25.284: INFO: Got endpoints: latency-svc-crcqb [2.214059779s]
Dec 23 14:58:25.310: INFO: Created: latency-svc-wnlxr
Dec 23 14:58:25.321: INFO: Got endpoints: latency-svc-wnlxr [2.186324912s]
Dec 23 14:58:25.375: INFO: Created: latency-svc-w76vv
Dec 23 14:58:25.387: INFO: Got endpoints: latency-svc-w76vv [1.613808632s]
Dec 23 14:58:25.526: INFO: Created: latency-svc-ftfsg
Dec 23 14:58:25.543: INFO: Got endpoints: latency-svc-ftfsg [1.618163513s]
Dec 23 14:58:25.585: INFO: Created: latency-svc-9dzpx
Dec 23 14:58:25.636: INFO: Got endpoints: latency-svc-9dzpx [1.540130534s]
Dec 23 14:58:25.663: INFO: Created: latency-svc-lbpz5
Dec 23 14:58:25.671: INFO: Got endpoints: latency-svc-lbpz5 [1.516121156s]
Dec 23 14:58:25.707: INFO: Created: latency-svc-mtkd8
Dec 23 14:58:25.711: INFO: Got endpoints: latency-svc-mtkd8 [1.410450742s]
Dec 23 14:58:25.789: INFO: Created: latency-svc-b8xjc
Dec 23 14:58:25.830: INFO: Got endpoints: latency-svc-b8xjc [1.31979648s]
Dec 23 14:58:25.834: INFO: Created: latency-svc-dskwq
Dec 23 14:58:25.842: INFO: Got endpoints: latency-svc-dskwq [1.329648402s]
Dec 23 14:58:25.981: INFO: Created: latency-svc-t9xsd
Dec 23 14:58:25.991: INFO: Got endpoints: latency-svc-t9xsd [1.290128489s]
Dec 23 14:58:26.028: INFO: Created: latency-svc-2xzng
Dec 23 14:58:26.054: INFO: Got endpoints: latency-svc-2xzng [1.325909937s]
Dec 23 14:58:26.164: INFO: Created: latency-svc-48f9q
Dec 23 14:58:26.185: INFO: Got endpoints: latency-svc-48f9q [1.251125889s]
Dec 23 14:58:26.210: INFO: Created: latency-svc-qpnnp
Dec 23 14:58:26.217: INFO: Got endpoints: latency-svc-qpnnp [1.234896411s]
Dec 23 14:58:26.429: INFO: Created: latency-svc-l45zm
Dec 23 14:58:26.445: INFO: Got endpoints: latency-svc-l45zm [1.307878668s]
Dec 23 14:58:26.509: INFO: Created: latency-svc-f4gmf
Dec 23 14:58:26.513: INFO: Got endpoints: latency-svc-f4gmf [1.363154913s]
Dec 23 14:58:26.628: INFO: Created: latency-svc-kq7nm
Dec 23 14:58:26.641: INFO: Got endpoints: latency-svc-kq7nm [1.356946328s]
Dec 23 14:58:26.803: INFO: Created: latency-svc-x6rsf
Dec 23 14:58:26.803: INFO: Got endpoints: latency-svc-x6rsf [1.482438829s]
Dec 23 14:58:26.864: INFO: Created: latency-svc-dd56l
Dec 23 14:58:26.876: INFO: Got endpoints: latency-svc-dd56l [1.488847836s]
Dec 23 14:58:26.995: INFO: Created: latency-svc-t86k9
Dec 23 14:58:27.042: INFO: Got endpoints: latency-svc-t86k9 [1.498528709s]
Dec 23 14:58:27.045: INFO: Created: latency-svc-4flfr
Dec 23 14:58:27.055: INFO: Got endpoints: latency-svc-4flfr [1.418213941s]
Dec 23 14:58:27.055: INFO: Latencies: [171.4574ms 220.287761ms 334.040083ms 407.886517ms 408.996252ms 536.673438ms 723.765202ms 783.372419ms 805.686392ms 1.004555895s 1.09632913s 1.226744165s 1.230633991s 1.23158368s 1.231849722s 1.234896411s 1.238461845s 1.241701524s 1.251125889s 1.251619196s 1.25508255s 1.260112611s 1.260301311s 1.266139792s 1.279071617s 1.282029441s 1.28901119s 1.290128489s 1.293456711s 1.293656831s 1.296613506s 1.301349289s 1.304994423s 1.307878668s 1.314875291s 1.314879501s 1.315262688s 1.316629513s 1.317566762s 1.318875643s 1.31979648s 1.321787435s 1.322669021s 1.325650668s 1.325909937s 1.327645087s 1.329648402s 1.333922694s 1.334439456s 1.340315111s 1.343328026s 1.347886409s 1.356946328s 1.360485355s 1.36069856s 1.363154913s 1.365261208s 1.378592436s 1.384300295s 1.387852106s 1.390918333s 1.394193178s 1.395242198s 1.395327849s 1.398022863s 1.408181761s 1.408623982s 1.410450742s 1.416655243s 1.417115718s 1.418213941s 1.422302865s 1.423202885s 1.434364872s 1.443703762s 1.444251796s 1.44544338s 1.447219502s 1.448704032s 1.45197427s 1.452495837s 1.455847953s 1.457131378s 1.462796431s 1.482438829s 1.488847836s 1.495116048s 1.498528709s 1.502906294s 1.50353055s 1.508360265s 1.513012897s 1.513958887s 1.514011355s 1.516121156s 1.517105293s 1.518601445s 1.525542239s 1.528669865s 1.530121219s 1.530181333s 1.534670937s 1.535405478s 1.538884407s 1.540130534s 1.540766739s 1.541129962s 1.552861181s 1.556422084s 1.560676963s 1.567095041s 1.5706569s 1.571223211s 1.571447939s 1.584299232s 1.588186829s 1.593018485s 1.605575305s 1.610130725s 1.612318975s 1.612391491s 1.613808632s 1.61673596s 1.616755997s 1.618163513s 1.620173388s 1.63646785s 1.637003876s 1.640252625s 1.640569895s 1.648787364s 1.655182752s 1.663259555s 1.665038893s 1.678985825s 1.682394598s 1.687052394s 1.688856733s 1.697944403s 1.698483066s 1.699100205s 1.704153473s 1.713770314s 1.74753775s 1.749597901s 1.751683821s 1.762742645s 1.767188472s 1.768402794s 1.7756408s 1.78163736s 1.800035315s 1.807529559s 1.815327642s 1.827757754s 1.844795634s 1.848761255s 1.853397812s 1.860952092s 1.868483475s 1.879041279s 1.915534154s 1.925715046s 1.943464778s 1.945150603s 1.963772768s 1.96578396s 1.972224329s 1.990914499s 2.008832961s 2.026452681s 2.058266961s 2.084079447s 2.115587777s 2.130191564s 2.176169025s 2.186324912s 2.189538919s 2.200671441s 2.20760658s 2.208179321s 2.209057114s 2.211878425s 2.214059779s 2.224980041s 2.227628422s 2.265065401s 2.270191709s 2.275424821s 2.298003402s 2.301065577s 2.321619594s 2.322199267s 2.332927251s 2.342309221s 2.342754405s 2.345183735s 2.356428736s 2.402872079s 2.428646478s]
Dec 23 14:58:27.055: INFO: 50 %ile: 1.530181333s
Dec 23 14:58:27.055: INFO: 90 %ile: 2.208179321s
Dec 23 14:58:27.055: INFO: 99 %ile: 2.402872079s
Dec 23 14:58:27.055: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:58:27.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7777" for this suite.
Dec 23 14:59:05.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:59:05.325: INFO: namespace svc-latency-7777 deletion completed in 38.198445891s

• [SLOW TEST:68.356 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
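The 50/90/99 %ile lines above are plain order statistics over the 200 collected endpoint-propagation latencies. A sketch of the arithmetic, seeded with a few values from the sample, using an indexing convention that reproduces the three figures printed above (0-based index n*p/100 into the sorted slice: 100, 180, and 198 for n=200); the framework's exact helper may be written differently:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile of a sorted sample: with 200
// samples, p=50 selects element 100, p=90 element 180, p=99 element 198.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	raw := []string{ // a few entries taken from the sample above
		"171.4574ms", "220.287761ms", "1.530181333s",
		"2.208179321s", "2.402872079s",
	}
	latencies := make([]time.Duration, 0, len(raw))
	for _, s := range raw {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		latencies = append(latencies, d)
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
	}
	fmt.Println("Total sample count:", len(latencies))
}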
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:59:05.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-92317466-267a-4eef-8696-fb5d75cfeb9d
STEP: Creating a pod to test consume configMaps
Dec 23 14:59:05.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705" in namespace "projected-45" to be "success or failure"
Dec 23 14:59:05.590: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705": Phase="Pending", Reason="", readiness=false. Elapsed: 20.234762ms
Dec 23 14:59:07.604: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034114497s
Dec 23 14:59:09.620: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050748508s
Dec 23 14:59:11.629: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058823589s
Dec 23 14:59:13.647: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077018677s
Dec 23 14:59:15.712: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142046789s
STEP: Saw pod success
Dec 23 14:59:15.712: INFO: Pod "pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705" satisfied condition "success or failure"
Dec 23 14:59:15.726: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 14:59:16.855: INFO: Waiting for pod pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705 to disappear
Dec 23 14:59:16.931: INFO: Pod pod-projected-configmaps-597df0b8-3d30-4bc1-a208-84fcd8cf1705 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:59:16.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-45" for this suite.
Dec 23 14:59:22.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:59:23.107: INFO: namespace projected-45 deletion completed in 6.164717005s

• [SLOW TEST:17.781 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
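The defaultMode variant of the projected configMap test only changes the volume definition: DefaultMode sets the permission bits applied to every file projected into the volume. A hypothetical snippet showing the field in place (0400 is chosen for illustration, not taken from the framework):

package sketch

import corev1 "k8s.io/api/core/v1"

func projectedConfigMapWithDefaultMode() corev1.Volume {
	mode := int32(0400) // read-only for the owning UID
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode, // applied to every key in the volume
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume",
						},
					},
				}},
			},
		},
	}
}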
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:59:23.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 23 14:59:23.270: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 14:59:46.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-190" for this suite.
Dec 23 14:59:52.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 14:59:52.767: INFO: namespace pods-190 deletion completed in 6.187103485s

• [SLOW TEST:29.660 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
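The submit-and-remove flow in the STEP lines above (watch first, then create, then delete gracefully and wait for the termination notice) can be reproduced outside the framework. A minimal sketch with 1.15-era client-go; the pod name, label, and 30s grace period are hypothetical:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(config)
	pods := cs.CoreV1().Pods("default")

	// Set up the watch before submitting, so the ADDED event is not missed.
	w, err := pods.Watch(metav1.ListOptions{LabelSelector: "app=submit-remove-demo"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-submit-remove", // hypothetical name
			Labels: map[string]string{"app": "submit-remove-demo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
		},
	}
	if _, err := pods.Create(pod); err != nil {
		panic(err)
	}

	grace := int64(30)
	if err := pods.Delete(pod.Name, &metav1.DeleteOptions{
		GracePeriodSeconds: &grace, // kubelet gets up to 30s to stop containers
	}); err != nil {
		panic(err)
	}
	for event := range w.ResultChan() {
		if event.Type == watch.Deleted {
			break // deletion observed, mirroring the final STEP above
		}
	}
}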
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 14:59:52.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-3aec1001-c4dc-4202-a2a7-a40e2ecc61d4
STEP: Creating secret with name s-test-opt-upd-0c2c155e-4bef-47b2-a93c-767c241371c0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3aec1001-c4dc-4202-a2a7-a40e2ecc61d4
STEP: Updating secret s-test-opt-upd-0c2c155e-4bef-47b2-a93c-767c241371c0
STEP: Creating secret with name s-test-opt-create-f512f1ca-3fd0-4fcb-9f09-1ac5e3e16adb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 15:00:07.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4347" for this suite.
Dec 23 15:00:29.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 15:00:29.714: INFO: namespace projected-4347 deletion completed in 22.251965029s

• [SLOW TEST:36.943 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
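The "optional updates" spec leans on the Optional flag of a secret projection: with Optional set, the pod starts even when a referenced secret is absent, and the kubelet re-projects the volume as secrets are deleted, updated, or created later, which is what the s-test-opt-del/upd/create sequence above exercises. A hypothetical snippet of one such source:

package sketch

import corev1 "k8s.io/api/core/v1"

func optionalSecretVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "s-test-opt-del", // may vanish while the pod runs
						},
						Optional: &optional, // a missing secret does not block the pod
					},
				}},
			},
		},
	}
}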
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 15:00:29.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 23 15:00:50.058: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:50.058: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:50.571: INFO: Exec stderr: ""
Dec 23 15:00:50.572: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:50.572: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:50.966: INFO: Exec stderr: ""
Dec 23 15:00:50.966: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:50.966: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:51.372: INFO: Exec stderr: ""
Dec 23 15:00:51.373: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:51.373: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:51.684: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 23 15:00:51.685: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:51.685: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:52.056: INFO: Exec stderr: ""
Dec 23 15:00:52.056: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:52.057: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:52.397: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 23 15:00:52.398: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:52.398: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:52.738: INFO: Exec stderr: ""
Dec 23 15:00:52.739: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:52.739: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:53.051: INFO: Exec stderr: ""
Dec 23 15:00:53.052: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:53.052: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:53.328: INFO: Exec stderr: ""
Dec 23 15:00:53.329: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-47 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 15:00:53.329: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 15:00:53.720: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 15:00:53.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-47" for this suite.
Dec 23 15:01:37.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 15:01:37.918: INFO: namespace e2e-kubelet-etc-hosts-47 deletion completed in 44.180877095s

• [SLOW TEST:68.204 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
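What this spec pins down: with hostNetwork: false the kubelet generates a managed /etc/hosts for each container, but it leaves the file alone when a container mounts its own file at that path, and for a hostNetwork: true pod the containers see the node's /etc/hosts untouched. The exec'd cat commands above compare /etc/hosts against a copy of the image's original file kept at /etc/hosts-original. A minimal sketch of the non-host-network pod (illustrative; it assumes the unmanaged container uses a hostPath mount at /etc/hosts, mirroring the mount the test sets up):

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
  spec:
    hostNetwork: false
    containers:
    - name: busybox-1                 # no /etc/hosts mount: the kubelet manages the file
      image: busybox
      command: ["sleep", "3600"]
    - name: busybox-3                 # mounts its own /etc/hosts: the kubelet leaves it alone
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: host-etc-hosts
        mountPath: /etc/hosts
    volumes:
    - name: host-etc-hosts
      hostPath:
        path: /etc/hosts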
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 15:01:37.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 23 15:01:38.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 23 15:01:38.203: INFO: stderr: ""
Dec 23 15:01:38.203: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 15:01:38.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5212" for this suite.
Dec 23 15:01:44.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 15:01:44.550: INFO: namespace kubectl-5212 deletion completed in 6.241954727s

• [SLOW TEST:6.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
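The assertion here is simply that kubectl version prints every field of version.Info for both the client and the server. The same data is available in structured form; kubectl version -o yaml against this cluster would produce roughly the following (field values copied from the stdout captured above):

  clientVersion:
    buildDate: "2019-12-22T16:55:20Z"
    compiler: gc
    gitCommit: 6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4
    gitTreeState: clean
    gitVersion: v1.15.7
    goVersion: go1.12.14
    major: "1"
    minor: "15"
    platform: linux/amd64
  serverVersion:
    buildDate: "2019-07-18T09:09:21Z"
    compiler: gc
    gitCommit: 4485c6f18cee9a5d3c3b4e523bd27972b1b53892
    gitTreeState: clean
    gitVersion: v1.15.1
    goVersion: go1.12.5
    major: "1"
    minor: "15"
    platform: linux/amd64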
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 15:01:44.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 23 15:01:44.718: INFO: Waiting up to 5m0s for pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8" in namespace "containers-8224" to be "success or failure"
Dec 23 15:01:44.724: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.684875ms
Dec 23 15:01:46.735: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016030208s
Dec 23 15:01:48.745: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026008785s
Dec 23 15:01:50.764: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045306913s
Dec 23 15:01:52.782: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063130071s
Dec 23 15:01:54.790: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071439034s
STEP: Saw pod success
Dec 23 15:01:54.790: INFO: Pod "client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8" satisfied condition "success or failure"
Dec 23 15:01:54.796: INFO: Trying to get logs from node iruya-node pod client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8 container test-container: 
STEP: delete the pod
Dec 23 15:01:54.910: INFO: Waiting for pod client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8 to disappear
Dec 23 15:01:54.919: INFO: Pod client-containers-98cd873b-db88-4891-aa6e-43c2b48745f8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 15:01:54.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8224" for this suite.
Dec 23 15:02:00.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 15:02:01.107: INFO: namespace containers-8224 deletion completed in 6.179623579s

• [SLOW TEST:16.555 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
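The override under test maps onto two pod-spec fields: command replaces the image's ENTRYPOINT and args replaces its CMD. The test's pod sets command, runs to completion, and lands in phase Succeeded, which is the "success or failure" condition polled above. A minimal sketch (image, command, and args are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["/bin/echo"]                # replaces the image's ENTRYPOINT
      args: ["entrypoint", "overridden"]    # replaces the image's CMD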
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 23 15:02:01.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 23 15:02:01.272: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 15:02:01.358: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 15:02:01.362: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 23 15:02:01.407: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 23 15:02:01.407: INFO: 	Container weave ready: true, restart count 0
Dec 23 15:02:01.407: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 15:02:01.407: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.407: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 15:02:01.407: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 23 15:02:01.425: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 23 15:02:01.425: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 23 15:02:01.425: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container coredns ready: true, restart count 0
Dec 23 15:02:01.425: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container etcd ready: true, restart count 0
Dec 23 15:02:01.425: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 23 15:02:01.425: INFO: 	Container weave ready: true, restart count 0
Dec 23 15:02:01.425: INFO: 	Container weave-npc ready: true, restart count 0
Dec 23 15:02:01.425: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container coredns ready: true, restart count 0
Dec 23 15:02:01.425: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 23 15:02:01.425: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 23 15:02:01.425: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e30835c05703dd], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 23 15:02:02.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8813" for this suite.
Dec 23 15:02:08.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 23 15:02:08.715: INFO: namespace sched-pred-8813 deletion completed in 6.239564312s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.606 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
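The predicate being validated: a pod whose nodeSelector matches no node label stays Pending, and the scheduler records the FailedScheduling event asserted above ("0/2 nodes are available: 2 node(s) didn't match node selector."). A minimal sketch of such a pod (the label key and value are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod
  spec:
    nodeSelector:
      env: does-not-exist      # no node carries this label, so the pod cannot schedule
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1

Labeling a node to match (for example, kubectl label node iruya-node env=does-not-exist) would let the pod schedule; that is the matching-selector variant of this predicate.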
------------------------------
SSSSSSSSS
Dec 23 15:02:08.717: INFO: Running AfterSuite actions on all nodes
Dec 23 15:02:08.717: INFO: Running AfterSuite actions on node 1
Dec 23 15:02:08.717: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7556.665 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS