I0216 12:55:58.617502 8 e2e.go:243] Starting e2e run "50cb5016-e60b-4227-8f03-b38718c02773" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581857757 - Will randomize all specs
Will run 215 of 4412 specs

Feb 16 12:55:58.880: INFO: >>> kubeConfig: /root/.kube/config
Feb 16 12:55:58.885: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 16 12:55:58.916: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 16 12:55:58.962: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 16 12:55:58.962: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 16 12:55:58.962: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 16 12:55:58.988: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 16 12:55:58.988: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 16 12:55:58.988: INFO: e2e test version: v1.15.7
Feb 16 12:55:58.993: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:55:58.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
Feb 16 12:55:59.084: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 16 12:55:59.086: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 16 12:55:59.091: INFO: Waiting for terminating namespaces to be deleted...
Feb 16 12:55:59.093: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 16 12:55:59.111: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 16 12:55:59.111: INFO: Container weave ready: true, restart count 0
Feb 16 12:55:59.111: INFO: Container weave-npc ready: true, restart count 0
Feb 16 12:55:59.111: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.111: INFO: Container kube-bench ready: false, restart count 0
Feb 16 12:55:59.111: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.111: INFO: Container kube-proxy ready: true, restart count 0
Feb 16 12:55:59.111: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 16 12:55:59.120: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container kube-scheduler ready: true, restart count 13
Feb 16 12:55:59.120: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container coredns ready: true, restart count 0
Feb 16 12:55:59.120: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container etcd ready: true, restart count 0
Feb 16 12:55:59.120: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container weave ready: true, restart count 0
Feb 16 12:55:59.120: INFO: Container weave-npc ready: true, restart count 0
Feb 16 12:55:59.120: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container coredns ready: true, restart count 0
Feb 16 12:55:59.120: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container kube-controller-manager ready: true, restart count 21
Feb 16 12:55:59.120: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container kube-proxy ready: true, restart count 0
Feb 16 12:55:59.120: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 16 12:55:59.120: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-deebaf3b-e2ff-46df-89fb-d0b13512f181 42
STEP: Trying to relaunch the pod, now with labels.
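The relaunch step above implies a pod spec whose nodeSelector matches the label the test just applied to the node. A minimal sketch of such a manifest, assuming the label key/value from this run; the pod name and image are illustrative, not taken from the log:

```yaml
# Hypothetical pod manifest; the nodeSelector must match a label present on the
# target node, or the scheduler leaves the pod Pending.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                                # illustrative name
spec:
  containers:
  - name: with-labels
    image: docker.io/library/nginx:1.14-alpine     # image seen elsewhere in this run
  nodeSelector:
    kubernetes.io/e2e-deebaf3b-e2ff-46df-89fb-d0b13512f181: "42"
```

This is the standard node-selection mechanism the predicate test exercises: the scheduler only places the pod on a node whose labels satisfy every key/value pair in `nodeSelector`.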
STEP: removing the label kubernetes.io/e2e-deebaf3b-e2ff-46df-89fb-d0b13512f181 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-deebaf3b-e2ff-46df-89fb-d0b13512f181
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:56:23.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9092" for this suite.
Feb 16 12:56:37.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:56:37.503: INFO: namespace sched-pred-9092 deletion completed in 14.127567397s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.510 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:56:37.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 16 12:56:37.659: INFO: Waiting up to 5m0s for pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072" in namespace "containers-4237" to be "success or failure"
Feb 16 12:56:37.668: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Pending", Reason="", readiness=false. Elapsed: 8.704644ms
Feb 16 12:56:39.678: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018450842s
Feb 16 12:56:41.697: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037402015s
Feb 16 12:56:43.708: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048325455s
Feb 16 12:56:45.718: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059224328s
Feb 16 12:56:47.738: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Pending", Reason="", readiness=false. Elapsed: 10.079050426s
Feb 16 12:56:49.748: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.08832346s
STEP: Saw pod success
Feb 16 12:56:49.748: INFO: Pod "client-containers-8bba7ada-8f67-4114-9689-322673e90072" satisfied condition "success or failure"
Feb 16 12:56:49.752: INFO: Trying to get logs from node iruya-node pod client-containers-8bba7ada-8f67-4114-9689-322673e90072 container test-container:
STEP: delete the pod
Feb 16 12:56:49.852: INFO: Waiting for pod client-containers-8bba7ada-8f67-4114-9689-322673e90072 to disappear
Feb 16 12:56:49.867: INFO: Pod client-containers-8bba7ada-8f67-4114-9689-322673e90072 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:56:49.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4237" for this suite.
Feb 16 12:56:55.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:56:56.045: INFO: namespace containers-4237 deletion completed in 6.169284356s

• [SLOW TEST:18.542 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:56:56.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:57:02.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7700" for this suite.
Feb 16 12:57:09.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:57:09.259: INFO: namespace namespaces-7700 deletion completed in 6.527324497s
STEP: Destroying namespace "nsdeletetest-6906" for this suite.
Feb 16 12:57:09.261: INFO: Namespace nsdeletetest-6906 was already deleted
STEP: Destroying namespace "nsdeletetest-5221" for this suite.
Feb 16 12:57:15.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:57:15.398: INFO: namespace nsdeletetest-5221 deletion completed in 6.136945689s

• [SLOW TEST:19.353 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:57:15.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 12:57:15.525: INFO: Create a RollingUpdate DaemonSet
Feb 16 12:57:15.533: INFO: Check that daemon pods launch on every node of the cluster
Feb 16 12:57:15.568: INFO: Number of nodes with available pods: 0
Feb 16 12:57:15.569: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:16.617: INFO: Number of nodes with available pods: 0
Feb 16 12:57:16.617: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:17.847: INFO: Number of nodes with available pods: 0
Feb 16 12:57:17.847: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:18.599: INFO: Number of nodes with available pods: 0
Feb 16 12:57:18.600: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:19.589: INFO: Number of nodes with available pods: 0
Feb 16 12:57:19.589: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:20.619: INFO: Number of nodes with available pods: 0
Feb 16 12:57:20.619: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:21.581: INFO: Number of nodes with available pods: 0
Feb 16 12:57:21.582: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:23.597: INFO: Number of nodes with available pods: 0
Feb 16 12:57:23.597: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:25.909: INFO: Number of nodes with available pods: 0
Feb 16 12:57:25.909: INFO: Node iruya-node is running more than one daemon pod
Feb 16 12:57:26.591: INFO: Number of nodes with available pods: 1
Feb 16 12:57:26.591: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 12:57:27.587: INFO: Number of nodes with available pods: 1
Feb 16 12:57:27.587: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 12:57:28.593: INFO: Number of nodes with available pods: 2
Feb 16 12:57:28.594: INFO: Number of running nodes: 2, number of available pods: 2
Feb 16 12:57:28.594: INFO: Update the DaemonSet to trigger a rollout
Feb 16 12:57:28.611: INFO: Updating DaemonSet daemon-set
Feb 16 12:57:39.557: INFO: Roll back the DaemonSet before rollout is complete
Feb 16 12:57:39.572: INFO: Updating DaemonSet daemon-set
Feb 16 12:57:39.572: INFO: Make sure DaemonSet rollback is complete
Feb 16 12:57:39.895: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:39.895: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:41.280: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:41.280: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:41.918: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:41.918: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:42.908: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:42.908: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:43.918: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:43.918: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:46.140: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:46.140: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:46.933: INFO: Wrong image for pod: daemon-set-9446q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 16 12:57:46.933: INFO: Pod daemon-set-9446q is not available
Feb 16 12:57:47.915: INFO: Pod daemon-set-55nch is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6666, will wait for the garbage collector to delete the pods
Feb 16 12:57:48.023: INFO: Deleting DaemonSet.extensions daemon-set took: 16.541271ms
Feb 16 12:57:48.623: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.458293ms
Feb 16 12:57:55.060: INFO: Number of nodes with available pods: 0
Feb 16 12:57:55.061: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 12:57:55.066: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6666/daemonsets","resourceVersion":"24569591"},"items":null}
Feb 16 12:57:55.070: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6666/pods","resourceVersion":"24569591"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:57:55.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6666" for this suite.
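The rollback sequence above (create a RollingUpdate DaemonSet, update it to the non-existent image foo:non-existent, then roll back before the rollout finishes) depends on the DaemonSet declaring a RollingUpdate strategy. A minimal sketch of such a spec, with illustrative labels; only the DaemonSet name and images come from the log:

```yaml
# Hypothetical DaemonSet fragment; the RollingUpdate updateStrategy is what makes
# the observed rollout, and the subsequent rollback to nginx:1.14-alpine, possible.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                              # illustrative label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine # the image the rollback restores
```

With OnDelete instead of RollingUpdate, the controller would not replace pods on update, so a rollback of this kind could not be observed through pod image changes.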
Feb 16 12:58:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:58:01.213: INFO: namespace daemonsets-6666 deletion completed in 6.127871333s

• [SLOW TEST:45.815 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:58:01.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 12:58:01.322: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 16 12:58:04.391: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:58:04.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5668" for this suite.
Feb 16 12:58:19.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:58:19.359: INFO: namespace replication-controller-5668 deletion completed in 14.382103352s

• [SLOW TEST:18.145 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:58:19.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-09d56567-4ef7-4705-b84d-163ac921e0bb
STEP: Creating a pod to test consume configMaps
Feb 16 12:58:19.523: INFO: Waiting up to 5m0s for pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c" in namespace "configmap-2687" to be "success or failure"
Feb 16 12:58:19.529: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.04223ms
Feb 16 12:58:21.535: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011151756s
Feb 16 12:58:23.546: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022587876s
Feb 16 12:58:25.552: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028091125s
Feb 16 12:58:27.562: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03810011s
Feb 16 12:58:29.571: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047871869s
Feb 16 12:58:31.580: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.056854184s
STEP: Saw pod success
Feb 16 12:58:31.580: INFO: Pod "pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c" satisfied condition "success or failure"
Feb 16 12:58:31.584: INFO: Trying to get logs from node iruya-node pod pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c container configmap-volume-test:
STEP: delete the pod
Feb 16 12:58:31.630: INFO: Waiting for pod pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c to disappear
Feb 16 12:58:31.635: INFO: Pod pod-configmaps-aed9c82a-bd59-4e30-bfc8-2d4b976eb81c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:58:31.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2687" for this suite.
Feb 16 12:58:37.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:58:37.772: INFO: namespace configmap-2687 deletion completed in 6.132083612s

• [SLOW TEST:18.411 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:58:37.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-93b1b0f2-90db-47aa-8b7e-7f65ea124911
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:58:37.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-226" for this suite.
Feb 16 12:58:43.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:58:43.995: INFO: namespace secrets-226 deletion completed in 6.136715643s

• [SLOW TEST:6.223 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:58:43.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 16 12:58:53.661: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 12:58:53.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3687" for this suite.
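The adopt-and-release behavior in the ReplicaSet test above hinges entirely on label selection: the controller adopts any running pod whose labels satisfy its selector, and releases a pod whose labels stop matching. A sketch of a matching ReplicaSet, assuming the 'name' label and resource name from the log; the image and replica count are illustrative:

```yaml
# Hypothetical ReplicaSet fragment; an orphan pod labeled name=pod-adoption-release
# is adopted because it satisfies this selector, and is released again if that
# label is changed on the pod.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine # illustrative image
```

Note that releasing a pod does not delete it; the ReplicaSet simply stops counting it toward `replicas` and creates a replacement.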
Feb 16 12:59:34.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:59:34.153: INFO: namespace replicaset-3687 deletion completed in 40.311904516s

• [SLOW TEST:50.158 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 12:59:34.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-fb116e03-a901-4c1e-8a7f-6d60e263d8f5 in namespace container-probe-7051
Feb 16 12:59:42.343: INFO: Started pod busybox-fb116e03-a901-4c1e-8a7f-6d60e263d8f5 in namespace container-probe-7051
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 12:59:42.349: INFO: Initial restart count of pod busybox-fb116e03-a901-4c1e-8a7f-6d60e263d8f5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:03:43.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7051" for this suite.
Feb 16 13:03:49.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:03:49.311: INFO: namespace container-probe-7051 deletion completed in 6.14954969s

• [SLOW TEST:255.158 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:03:49.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-84464a83-6ac6-4db6-be8b-4fca303e198f in namespace container-probe-5574
Feb 16 13:03:59.503: INFO: Started pod liveness-84464a83-6ac6-4db6-be8b-4fca303e198f in namespace container-probe-5574
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 13:03:59.512: INFO: Initial restart count of pod liveness-84464a83-6ac6-4db6-be8b-4fca303e198f is 0
Feb 16 13:04:23.651: INFO: Restart count of pod container-probe-5574/liveness-84464a83-6ac6-4db6-be8b-4fca303e198f is now 1 (24.138668544s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:04:23.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5574" for this suite.
Feb 16 13:04:29.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:04:29.958: INFO: namespace container-probe-5574 deletion completed in 6.245407192s

• [SLOW TEST:40.646 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:04:29.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 16 13:04:40.719: INFO: Successfully updated pod "annotationupdate2ccb73a9-4c4b-48db-8c12-4d4be2c8ac47" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 16 13:04:42.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4334" for this suite. Feb 16 13:05:04.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 13:05:04.944: INFO: namespace projected-4334 deletion completed in 22.154122688s • [SLOW TEST:34.986 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 16 13:05:04.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 16 13:05:05.030: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 16 13:05:05.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2930" for this suite. Feb 16 13:05:11.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 13:05:11.217: INFO: namespace kubectl-2930 deletion completed in 6.106180945s • [SLOW TEST:6.272 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 16 13:05:11.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 16 13:05:11.342: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 16 13:05:16.362: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 16 13:05:24.382: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 16 13:05:24.484: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4272,SelfLink:/apis/apps/v1/namespaces/deployment-4272/deployments/test-cleanup-deployment,UID:836df877-fd5c-4f81-86c9-8b8b9e43b481,ResourceVersion:24570455,Generation:1,CreationTimestamp:2020-02-16 13:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 16 13:05:24.515: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4272,SelfLink:/apis/apps/v1/namespaces/deployment-4272/replicasets/test-cleanup-deployment-55bbcbc84c,UID:7ff3cfbd-77d6-4131-9e9e-bdb627022dc9,ResourceVersion:24570461,Generation:1,CreationTimestamp:2020-02-16 13:05:24 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 836df877-fd5c-4f81-86c9-8b8b9e43b481 0xc0026b6a87 0xc0026b6a88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 16 13:05:24.515: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 16 13:05:24.515: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4272,SelfLink:/apis/apps/v1/namespaces/deployment-4272/replicasets/test-cleanup-controller,UID:8d6b7112-5183-4f1e-afeb-3679aa957616,ResourceVersion:24570456,Generation:1,CreationTimestamp:2020-02-16 13:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 836df877-fd5c-4f81-86c9-8b8b9e43b481 0xc0026b69b7 0xc0026b69b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 16 13:05:24.572: INFO: Pod "test-cleanup-controller-6c7sz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-6c7sz,GenerateName:test-cleanup-controller-,Namespace:deployment-4272,SelfLink:/api/v1/namespaces/deployment-4272/pods/test-cleanup-controller-6c7sz,UID:bb7bf651-9d3a-439a-b79c-c662379fc657,ResourceVersion:24570451,Generation:0,CreationTimestamp:2020-02-16 13:05:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 8d6b7112-5183-4f1e-afeb-3679aa957616 0xc00168cd77 0xc00168cd78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lf598 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lf598,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lf598 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00168cdf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00168ce10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:05:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:05:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:05:22 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:05:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-16 13:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 13:05:21 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://daad924aa968d519fa6ffd6785bd932d876980de6503629021cec0dc6f20528d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 16 13:05:24.572: INFO: Pod "test-cleanup-deployment-55bbcbc84c-8sh62" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-8sh62,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4272,SelfLink:/api/v1/namespaces/deployment-4272/pods/test-cleanup-deployment-55bbcbc84c-8sh62,UID:e8304f4f-5db8-4e34-b7c5-52ee029c5e84,ResourceVersion:24570463,Generation:0,CreationTimestamp:2020-02-16 13:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 7ff3cfbd-77d6-4131-9e9e-bdb627022dc9 0xc00168cef7 0xc00168cef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lf598 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lf598,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lf598 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00168cf70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00168cf90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:05:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 16 13:05:24.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4272" for this suite. 
Feb 16 13:05:30.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 13:05:30.751: INFO: namespace deployment-4272 deletion completed in 6.134065141s • [SLOW TEST:19.535 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 16 13:05:30.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 16 13:05:45.123: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Feb 16 13:06:00.215: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 16 13:06:00.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4381" for this suite. Feb 16 13:06:06.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 13:06:06.397: INFO: namespace pods-4381 deletion completed in 6.172040123s • [SLOW TEST:35.645 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 16 13:06:06.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 16 13:06:06.542: INFO: Waiting up to 5m0s for pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974" in namespace "downward-api-7263" to be "success or failure" Feb 16 13:06:06.550: INFO: Pod 
"downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Pending", Reason="", readiness=false. Elapsed: 8.295998ms Feb 16 13:06:08.565: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022599528s Feb 16 13:06:10.580: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03852899s Feb 16 13:06:12.601: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059413225s Feb 16 13:06:14.627: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084835623s Feb 16 13:06:16.633: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091379974s Feb 16 13:06:18.639: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.097609342s STEP: Saw pod success Feb 16 13:06:18.640: INFO: Pod "downward-api-158d6866-72b8-41b5-892c-9fce77e41974" satisfied condition "success or failure" Feb 16 13:06:18.643: INFO: Trying to get logs from node iruya-node pod downward-api-158d6866-72b8-41b5-892c-9fce77e41974 container dapi-container: STEP: delete the pod Feb 16 13:06:18.769: INFO: Waiting for pod downward-api-158d6866-72b8-41b5-892c-9fce77e41974 to disappear Feb 16 13:06:18.921: INFO: Pod downward-api-158d6866-72b8-41b5-892c-9fce77e41974 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 16 13:06:18.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7263" for this suite. 
Feb 16 13:06:24.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 13:06:25.062: INFO: namespace downward-api-7263 deletion completed in 6.131848753s • [SLOW TEST:18.665 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 16 13:06:25.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Feb 16 13:06:25.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 16 13:06:25.369: INFO: stderr: "" Feb 16 13:06:25.370: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 16 13:06:25.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2751" for this suite. 
Feb 16 13:06:31.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:06:31.541: INFO: namespace kubectl-2751 deletion completed in 6.163252796s

• [SLOW TEST:6.478 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:06:31.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-rl6w
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 13:06:31.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rl6w" in namespace "subpath-8178" to be "success or failure"
Feb 16 13:06:31.682: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Pending", Reason="", readiness=false. Elapsed: 9.662086ms
Feb 16 13:06:33.696: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022813272s
Feb 16 13:06:35.702: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028968213s
Feb 16 13:06:37.711: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03819971s
Feb 16 13:06:39.788: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115407175s
Feb 16 13:06:41.803: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 10.130699828s
Feb 16 13:06:43.812: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 12.138849161s
Feb 16 13:06:45.818: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 14.14549121s
Feb 16 13:06:47.831: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 16.158465556s
Feb 16 13:06:49.841: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 18.168674459s
Feb 16 13:06:51.854: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 20.180972075s
Feb 16 13:06:53.861: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 22.188444922s
Feb 16 13:06:55.903: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 24.230403026s
Feb 16 13:06:57.911: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 26.237882843s
Feb 16 13:06:59.924: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 28.251605225s
Feb 16 13:07:01.932: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Running", Reason="", readiness=true. Elapsed: 30.259485949s
Feb 16 13:07:03.940: INFO: Pod "pod-subpath-test-downwardapi-rl6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.2676861s
STEP: Saw pod success
Feb 16 13:07:03.941: INFO: Pod "pod-subpath-test-downwardapi-rl6w" satisfied condition "success or failure"
Feb 16 13:07:03.943: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-rl6w container test-container-subpath-downwardapi-rl6w: 
STEP: delete the pod
Feb 16 13:07:03.983: INFO: Waiting for pod pod-subpath-test-downwardapi-rl6w to disappear
Feb 16 13:07:04.004: INFO: Pod pod-subpath-test-downwardapi-rl6w no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rl6w
Feb 16 13:07:04.004: INFO: Deleting pod "pod-subpath-test-downwardapi-rl6w" in namespace "subpath-8178"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:07:04.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8178" for this suite.
Feb 16 13:07:10.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:07:10.265: INFO: namespace subpath-8178 deletion completed in 6.251151651s

• [SLOW TEST:38.724 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:07:10.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 16 13:07:10.353: INFO: Waiting up to 5m0s for pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94" in namespace "downward-api-4682" to be "success or failure"
Feb 16 13:07:10.418: INFO: Pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94": Phase="Pending", Reason="", readiness=false. Elapsed: 64.67553ms
Feb 16 13:07:12.429: INFO: Pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075446495s
Feb 16 13:07:14.436: INFO: Pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08265996s
Feb 16 13:07:16.450: INFO: Pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096952724s
Feb 16 13:07:18.566: INFO: Pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.212901761s
STEP: Saw pod success
Feb 16 13:07:18.566: INFO: Pod "downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94" satisfied condition "success or failure"
Feb 16 13:07:18.572: INFO: Trying to get logs from node iruya-node pod downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94 container dapi-container: 
STEP: delete the pod
Feb 16 13:07:18.730: INFO: Waiting for pod downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94 to disappear
Feb 16 13:07:18.736: INFO: Pod downward-api-70e6cd42-58c9-45c5-847e-c5b8fec9af94 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:07:18.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4682" for this suite.
Feb 16 13:07:24.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:07:24.912: INFO: namespace downward-api-4682 deletion completed in 6.170008105s

• [SLOW TEST:14.647 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:07:24.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:07:25.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee" in namespace "projected-4158" to be "success or failure"
Feb 16 13:07:25.032: INFO: Pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee": Phase="Pending", Reason="", readiness=false. Elapsed: 11.532203ms
Feb 16 13:07:27.041: INFO: Pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020248187s
Feb 16 13:07:29.050: INFO: Pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029458834s
Feb 16 13:07:31.057: INFO: Pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036720171s
Feb 16 13:07:33.074: INFO: Pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052910333s
STEP: Saw pod success
Feb 16 13:07:33.074: INFO: Pod "downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee" satisfied condition "success or failure"
Feb 16 13:07:33.079: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee container client-container: 
STEP: delete the pod
Feb 16 13:07:33.230: INFO: Waiting for pod downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee to disappear
Feb 16 13:07:33.234: INFO: Pod downwardapi-volume-a99384ef-a22c-4d87-ae7b-9dda84881dee no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:07:33.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4158" for this suite.
Feb 16 13:07:39.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:07:39.399: INFO: namespace projected-4158 deletion completed in 6.162287935s

• [SLOW TEST:14.487 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:07:39.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-655.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-655.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 13:07:51.659: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.672: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.680: INFO: Unable to read wheezy_udp@PodARecord from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.686: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.692: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.697: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.702: INFO: Unable to read jessie_udp@PodARecord from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.710: INFO: Unable to read jessie_tcp@PodARecord from pod dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec: the server could not find the requested resource (get pods dns-test-e102215b-66e0-4af9-9940-a4eff78247ec)
Feb 16 13:07:51.710: INFO: Lookups using dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 16 13:07:56.774: INFO: DNS probes using dns-655/dns-test-e102215b-66e0-4af9-9940-a4eff78247ec succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:07:56.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-655" for this suite.
Feb 16 13:08:02.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:08:03.090: INFO: namespace dns-655 deletion completed in 6.238319887s

• [SLOW TEST:23.690 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:08:03.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 16 13:08:04.024: INFO: Pod name wrapped-volume-race-c7c0d279-4ad3-4543-8b16-6c3d3be44d5f: Found 0 pods out of 5
Feb 16 13:08:09.037: INFO: Pod name wrapped-volume-race-c7c0d279-4ad3-4543-8b16-6c3d3be44d5f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c7c0d279-4ad3-4543-8b16-6c3d3be44d5f in namespace emptydir-wrapper-4037, will wait for the garbage collector to delete the pods
Feb 16 13:08:39.189: INFO: Deleting ReplicationController wrapped-volume-race-c7c0d279-4ad3-4543-8b16-6c3d3be44d5f took: 15.538529ms
Feb 16 13:08:39.590: INFO: Terminating ReplicationController wrapped-volume-race-c7c0d279-4ad3-4543-8b16-6c3d3be44d5f pods took: 400.490535ms
STEP: Creating RC which spawns configmap-volume pods
Feb 16 13:09:27.337: INFO: Pod name wrapped-volume-race-3ffa5363-b816-44cf-a89d-9b5a3373eba6: Found 0 pods out of 5
Feb 16 13:09:32.348: INFO: Pod name wrapped-volume-race-3ffa5363-b816-44cf-a89d-9b5a3373eba6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3ffa5363-b816-44cf-a89d-9b5a3373eba6 in namespace emptydir-wrapper-4037, will wait for the garbage collector to delete the pods
Feb 16 13:10:02.467: INFO: Deleting ReplicationController wrapped-volume-race-3ffa5363-b816-44cf-a89d-9b5a3373eba6 took: 28.598309ms
Feb 16 13:10:02.868: INFO: Terminating ReplicationController wrapped-volume-race-3ffa5363-b816-44cf-a89d-9b5a3373eba6 pods took: 400.822074ms
STEP: Creating RC which spawns configmap-volume pods
Feb 16 13:10:46.943: INFO: Pod name wrapped-volume-race-cfa6729d-e5d0-41a8-bb01-b54811cdac7e: Found 0 pods out of 5
Feb 16 13:10:51.996: INFO: Pod name wrapped-volume-race-cfa6729d-e5d0-41a8-bb01-b54811cdac7e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-cfa6729d-e5d0-41a8-bb01-b54811cdac7e in namespace emptydir-wrapper-4037, will wait for the garbage collector to delete the pods
Feb 16 13:11:18.109: INFO: Deleting ReplicationController wrapped-volume-race-cfa6729d-e5d0-41a8-bb01-b54811cdac7e took: 15.129085ms
Feb 16 13:11:18.510: INFO: Terminating ReplicationController wrapped-volume-race-cfa6729d-e5d0-41a8-bb01-b54811cdac7e pods took: 400.974574ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:12:08.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4037" for this suite.
Feb 16 13:12:18.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:12:19.084: INFO: namespace emptydir-wrapper-4037 deletion completed in 10.159962983s

• [SLOW TEST:255.994 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:12:19.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:12:19.230: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.472686ms)
Feb 16 13:12:19.238: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.658499ms)
Feb 16 13:12:19.248: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.299837ms)
Feb 16 13:12:19.256: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.145704ms)
Feb 16 13:12:19.263: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.156066ms)
Feb 16 13:12:19.269: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.575259ms)
Feb 16 13:12:19.276: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.756687ms)
Feb 16 13:12:19.281: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.844564ms)
Feb 16 13:12:19.288: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.696135ms)
Feb 16 13:12:19.337: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 49.577778ms)
Feb 16 13:12:19.346: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.200274ms)
Feb 16 13:12:19.364: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.499663ms)
Feb 16 13:12:19.371: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.120007ms)
Feb 16 13:12:19.380: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.80826ms)
Feb 16 13:12:19.405: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.340063ms)
Feb 16 13:12:19.413: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.105563ms)
Feb 16 13:12:19.418: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.666174ms)
Feb 16 13:12:19.424: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.755488ms)
Feb 16 13:12:19.430: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.966803ms)
Feb 16 13:12:19.468: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.738926ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:12:19.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6494" for this suite.
Feb 16 13:12:25.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:12:25.739: INFO: namespace proxy-6494 deletion completed in 6.265006258s

• [SLOW TEST:6.655 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
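All 20 proxied requests above returned HTTP 200, each with a per-request latency. As a quick sanity check on such a log slice, the statuses and latencies can be tallied with standard shell tools; the sketch below inlines two sample lines (a stand-in for the full saved log, and `/tmp/proxy.log` is an assumed scratch path) so it runs without a cluster:

```shell
# Count 200 responses and report the slowest request from saved proxy log lines.
# The two inlined sample lines are an abridged stand-in for the full log.
cat > /tmp/proxy.log <<'EOF'
Feb 16 13:12:19.230: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: ... (200; 17.472686ms)
Feb 16 13:12:19.337: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: ... (200; 49.577778ms)
EOF
echo "200 responses: $(grep -c '(200;' /tmp/proxy.log)"
# Pull out each millisecond latency and print the largest.
sed -n 's/.*(200; \([0-9.]*\)ms).*/\1/p' /tmp/proxy.log | sort -n | tail -n 1
```

Applied to the full run above, this would report 20 responses, with request (9) at 49.577778ms as the slowest outlier.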
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:12:25.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:12:25.912: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 16 13:12:25.965: INFO: Number of nodes with available pods: 0
Feb 16 13:12:25.965: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 16 13:12:26.152: INFO: Number of nodes with available pods: 0
Feb 16 13:12:26.152: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:27.163: INFO: Number of nodes with available pods: 0
Feb 16 13:12:27.163: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:28.161: INFO: Number of nodes with available pods: 0
Feb 16 13:12:28.161: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:29.159: INFO: Number of nodes with available pods: 0
Feb 16 13:12:29.159: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:30.164: INFO: Number of nodes with available pods: 0
Feb 16 13:12:30.164: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:31.160: INFO: Number of nodes with available pods: 0
Feb 16 13:12:31.160: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:32.243: INFO: Number of nodes with available pods: 0
Feb 16 13:12:32.243: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:33.159: INFO: Number of nodes with available pods: 0
Feb 16 13:12:33.159: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:34.158: INFO: Number of nodes with available pods: 0
Feb 16 13:12:34.158: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:35.158: INFO: Number of nodes with available pods: 0
Feb 16 13:12:35.158: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:36.162: INFO: Number of nodes with available pods: 0
Feb 16 13:12:36.162: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:37.159: INFO: Number of nodes with available pods: 0
Feb 16 13:12:37.159: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:38.164: INFO: Number of nodes with available pods: 0
Feb 16 13:12:38.164: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:39.159: INFO: Number of nodes with available pods: 1
Feb 16 13:12:39.159: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 16 13:12:39.216: INFO: Number of nodes with available pods: 1
Feb 16 13:12:39.216: INFO: Number of running nodes: 0, number of available pods: 1
Feb 16 13:12:40.224: INFO: Number of nodes with available pods: 0
Feb 16 13:12:40.224: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 16 13:12:40.246: INFO: Number of nodes with available pods: 0
Feb 16 13:12:40.246: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:41.254: INFO: Number of nodes with available pods: 0
Feb 16 13:12:41.254: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:42.260: INFO: Number of nodes with available pods: 0
Feb 16 13:12:42.260: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:43.258: INFO: Number of nodes with available pods: 0
Feb 16 13:12:43.258: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:44.255: INFO: Number of nodes with available pods: 0
Feb 16 13:12:44.255: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:45.259: INFO: Number of nodes with available pods: 0
Feb 16 13:12:45.259: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:46.254: INFO: Number of nodes with available pods: 0
Feb 16 13:12:46.254: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:47.258: INFO: Number of nodes with available pods: 0
Feb 16 13:12:47.258: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:48.257: INFO: Number of nodes with available pods: 0
Feb 16 13:12:48.257: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:49.254: INFO: Number of nodes with available pods: 0
Feb 16 13:12:49.254: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:50.255: INFO: Number of nodes with available pods: 0
Feb 16 13:12:50.255: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:51.251: INFO: Number of nodes with available pods: 0
Feb 16 13:12:51.251: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:52.268: INFO: Number of nodes with available pods: 0
Feb 16 13:12:52.268: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:53.259: INFO: Number of nodes with available pods: 0
Feb 16 13:12:53.259: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:54.261: INFO: Number of nodes with available pods: 0
Feb 16 13:12:54.261: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:55.256: INFO: Number of nodes with available pods: 0
Feb 16 13:12:55.256: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:56.258: INFO: Number of nodes with available pods: 0
Feb 16 13:12:56.258: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:57.258: INFO: Number of nodes with available pods: 0
Feb 16 13:12:57.258: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:58.252: INFO: Number of nodes with available pods: 0
Feb 16 13:12:58.252: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:12:59.256: INFO: Number of nodes with available pods: 0
Feb 16 13:12:59.257: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:13:00.251: INFO: Number of nodes with available pods: 0
Feb 16 13:13:00.251: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:13:01.258: INFO: Number of nodes with available pods: 0
Feb 16 13:13:01.258: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:13:02.738: INFO: Number of nodes with available pods: 0
Feb 16 13:13:02.738: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:13:03.255: INFO: Number of nodes with available pods: 0
Feb 16 13:13:03.255: INFO: Node iruya-node is running more than one daemon pod
Feb 16 13:13:04.252: INFO: Number of nodes with available pods: 1
Feb 16 13:13:04.252: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9415, will wait for the garbage collector to delete the pods
Feb 16 13:13:04.327: INFO: Deleting DaemonSet.extensions daemon-set took: 14.711934ms
Feb 16 13:13:04.628: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.661463ms
Feb 16 13:13:16.642: INFO: Number of nodes with available pods: 0
Feb 16 13:13:16.642: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 13:13:16.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9415/daemonsets","resourceVersion":"24572186"},"items":null}

Feb 16 13:13:16.653: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9415/pods","resourceVersion":"24572186"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:13:16.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9415" for this suite.
Feb 16 13:13:22.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:13:22.947: INFO: namespace daemonsets-9415 deletion completed in 6.167276823s

• [SLOW TEST:57.207 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
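The long runs of "Number of nodes with available pods: 0" above are the framework polling DaemonSet status roughly once per second until the relabeled node reports an available pod. A minimal sketch of that poll-until-available loop, with the status query stubbed by a counter (an assumption, so the loop terminates without a cluster):

```shell
# Sketch of the wait loop the DaemonSet test runs after relabeling a node:
# poll the count of available daemon pods until it reaches the desired count.
# Against a real cluster the query would be something like:
#   kubectl get ds daemon-set -o jsonpath='{.status.numberAvailable}'
desired=1
polls=0
avail=0
while [ "$avail" -lt "$desired" ]; do
  polls=$((polls + 1))
  # stub: pretend the daemon pod becomes available on the third poll
  if [ "$polls" -ge 3 ]; then avail=1; fi
  sleep 0   # the real test waits about 1s between polls
done
echo "available after $polls polls"
# → available after 3 polls
```

The same bounded-retry shape explains the repeated log lines: each iteration prints the current count, and the test fails only if the count never converges before its overall timeout.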
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:13:22.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 16 13:13:31.471: INFO: 10 pods remaining
Feb 16 13:13:31.471: INFO: 5 pods have nil DeletionTimestamp
Feb 16 13:13:31.471: INFO: 
Feb 16 13:13:32.151: INFO: 0 pods remaining
Feb 16 13:13:32.151: INFO: 0 pods have nil DeletionTimestamp
Feb 16 13:13:32.151: INFO: 
STEP: Gathering metrics
W0216 13:13:33.037414       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 13:13:33.037: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:13:33.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-170" for this suite.
Feb 16 13:13:43.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:13:43.163: INFO: namespace gc-170 deletion completed in 10.123202073s

• [SLOW TEST:20.216 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
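A note on the deleteOptions this test exercises: with Foreground propagation the API server marks the rc with a deletionTimestamp but keeps it until the garbage collector has removed every dependent pod, which is exactly the window the "10 pods remaining" lines above capture. A minimal sketch of such a request body (the field names are from the Kubernetes DeleteOptions API; the exact body this test sends is not shown in the log):

```python
import json

# Foreground propagation: the owner (the rc) stays visible, with a
# deletionTimestamp set, until the GC has deleted all dependents.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",
}
body = json.dumps(delete_options)
assert json.loads(body)["propagationPolicy"] == "Foreground"
```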
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:13:43.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0216 13:14:13.954488       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 13:14:13.954: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:14:13.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6525" for this suite.
Feb 16 13:14:22.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:14:22.268: INFO: namespace gc-6525 deletion completed in 8.31011102s

• [SLOW TEST:39.104 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
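With PropagationPolicy=Orphan, the garbage collector removes the Deployment's owner reference from the ReplicaSet instead of deleting it, so the 30-second watch above sees the RS survive. A sketch of that bookkeeping on simplified object shapes (the name and UID below are hypothetical, not taken from this run):

```python
deployment_uid = "d1f0c0de-0000-0000-0000-000000000000"  # hypothetical UID
replica_set = {
    "metadata": {
        "name": "nginx-7db9fccd9b",  # hypothetical RS name
        "ownerReferences": [
            {"kind": "Deployment", "uid": deployment_uid, "controller": True}
        ],
    }
}

def orphan(obj, owner_uid):
    # Orphaning strips the owner reference; the object itself is kept.
    refs = obj["metadata"].get("ownerReferences", [])
    obj["metadata"]["ownerReferences"] = [r for r in refs if r["uid"] != owner_uid]
    return obj

orphan(replica_set, deployment_uid)
# With no owner references left, the GC has no reason to delete the RS.
assert replica_set["metadata"]["ownerReferences"] == []
```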
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:14:22.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6c72657f-1d58-43c5-89e7-17a591a7b83c
STEP: Creating a pod to test consume secrets
Feb 16 13:14:23.738: INFO: Waiting up to 5m0s for pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24" in namespace "secrets-2199" to be "success or failure"
Feb 16 13:14:23.749: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24": Phase="Pending", Reason="", readiness=false. Elapsed: 11.435986ms
Feb 16 13:14:25.802: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064667093s
Feb 16 13:14:27.882: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144669069s
Feb 16 13:14:29.890: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151766304s
Feb 16 13:14:31.902: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164139611s
Feb 16 13:14:34.043: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.305521724s
STEP: Saw pod success
Feb 16 13:14:34.043: INFO: Pod "pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24" satisfied condition "success or failure"
Feb 16 13:14:34.051: INFO: Trying to get logs from node iruya-node pod pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24 container secret-volume-test: 
STEP: delete the pod
Feb 16 13:14:34.370: INFO: Waiting for pod pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24 to disappear
Feb 16 13:14:34.389: INFO: Pod pod-secrets-6d8f87b7-1618-47ee-a9fc-749975f91a24 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:14:34.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2199" for this suite.
Feb 16 13:14:40.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:14:40.649: INFO: namespace secrets-2199 deletion completed in 6.190089659s

• [SLOW TEST:18.381 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
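On what defaultMode means in this spec: the kubelet projects each secret key as a file whose permission bits come from defaultMode, and fsGroup adjusts group ownership of the volume so a non-root container can read it. The concrete mode below (0o440) is only illustrative; the value this test uses is not shown in the log:

```python
import stat

# A defaultMode of 0o440 yields owner/group read-only files in the volume.
default_mode = 0o440  # illustrative, not taken from the test source
assert stat.filemode(stat.S_IFREG | default_mode) == "-r--r-----"
```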
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:14:40.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7fe7648c-bdae-442f-b745-ab1e7df4c3f7
STEP: Creating a pod to test consume secrets
Feb 16 13:14:40.794: INFO: Waiting up to 5m0s for pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef" in namespace "secrets-8360" to be "success or failure"
Feb 16 13:14:40.804: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef": Phase="Pending", Reason="", readiness=false. Elapsed: 9.530574ms
Feb 16 13:14:42.810: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015933675s
Feb 16 13:14:44.821: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026718263s
Feb 16 13:14:46.833: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038762483s
Feb 16 13:14:48.850: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055947329s
Feb 16 13:14:50.873: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078507128s
STEP: Saw pod success
Feb 16 13:14:50.873: INFO: Pod "pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef" satisfied condition "success or failure"
Feb 16 13:14:50.879: INFO: Trying to get logs from node iruya-node pod pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef container secret-volume-test: 
STEP: delete the pod
Feb 16 13:14:50.983: INFO: Waiting for pod pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef to disappear
Feb 16 13:14:51.000: INFO: Pod pod-secrets-cb8f90df-76f5-45a7-98d9-3bca05b9caef no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:14:51.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8360" for this suite.
Feb 16 13:14:57.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:14:57.158: INFO: namespace secrets-8360 deletion completed in 6.151776653s

• [SLOW TEST:16.509 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:14:57.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:15:06.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2150" for this suite.
Feb 16 13:15:29.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:15:29.548: INFO: namespace replication-controller-2150 deletion completed in 23.041266533s

• [SLOW TEST:32.390 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
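Adoption in this test hinges on equality-based label selection: the replication controller adopts a live pod whose labels satisfy its selector and which has no controller owner yet. A minimal sketch of the match rule, using the 'name' label from the steps above:

```python
def selector_matches(selector, pod_labels):
    # Equality-based selectors: every key/value pair in the selector must
    # be present verbatim in the pod's labels.
    return all(pod_labels.get(k) == v for k, v in selector.items())

pod_labels = {"name": "pod-adoption"}    # the pod created in the Given step
rc_selector = {"name": "pod-adoption"}   # the rc's matching selector
assert selector_matches(rc_selector, pod_labels)
assert not selector_matches({"name": "other"}, pod_labels)
```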
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:15:29.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 16 13:15:29.776: INFO: Waiting up to 5m0s for pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df" in namespace "emptydir-7226" to be "success or failure"
Feb 16 13:15:29.791: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Pending", Reason="", readiness=false. Elapsed: 15.470076ms
Feb 16 13:15:31.810: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033881723s
Feb 16 13:15:33.858: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082019542s
Feb 16 13:15:35.870: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094350999s
Feb 16 13:15:37.886: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109659729s
Feb 16 13:15:39.897: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Running", Reason="", readiness=true. Elapsed: 10.12069222s
Feb 16 13:15:41.903: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.126873987s
STEP: Saw pod success
Feb 16 13:15:41.903: INFO: Pod "pod-1422b5a9-89e8-4662-9a36-670deb9a48df" satisfied condition "success or failure"
Feb 16 13:15:41.907: INFO: Trying to get logs from node iruya-node pod pod-1422b5a9-89e8-4662-9a36-670deb9a48df container test-container: 
STEP: delete the pod
Feb 16 13:15:41.959: INFO: Waiting for pod pod-1422b5a9-89e8-4662-9a36-670deb9a48df to disappear
Feb 16 13:15:41.972: INFO: Pod pod-1422b5a9-89e8-4662-9a36-670deb9a48df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:15:41.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7226" for this suite.
Feb 16 13:15:48.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:15:48.176: INFO: namespace emptydir-7226 deletion completed in 6.19893399s

• [SLOW TEST:18.628 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
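The "(non-root,0644,default)" case writes a file with mode 0644 into an emptyDir on the default medium and reads the bits back. The same permission check against a local temp directory standing in for the volume:

```python
import os
import stat
import tempfile

# Stand-in for the emptyDir mount: create a file and pin its mode to 0644.
with tempfile.TemporaryDirectory() as vol:
    path = os.path.join(vol, "mount-test")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY)
    os.close(fd)
    os.chmod(path, 0o644)  # explicit chmod so the umask cannot interfere
    observed = stat.S_IMODE(os.stat(path).st_mode)

assert stat.filemode(stat.S_IFREG | observed) == "-rw-r--r--"
```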
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:15:48.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 16 13:15:48.258: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 16 13:15:48.267: INFO: Waiting for terminating namespaces to be deleted...
Feb 16 13:15:48.269: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 16 13:15:48.282: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.282: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 16 13:15:48.282: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 16 13:15:48.282: INFO: 	Container weave ready: true, restart count 0
Feb 16 13:15:48.282: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 13:15:48.282: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.282: INFO: 	Container kube-bench ready: false, restart count 0
Feb 16 13:15:48.282: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 16 13:15:48.292: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 16 13:15:48.292: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container coredns ready: true, restart count 0
Feb 16 13:15:48.292: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container etcd ready: true, restart count 0
Feb 16 13:15:48.292: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 16 13:15:48.292: INFO: 	Container weave ready: true, restart count 0
Feb 16 13:15:48.292: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 13:15:48.292: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container coredns ready: true, restart count 0
Feb 16 13:15:48.292: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 16 13:15:48.292: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 16 13:15:48.292: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb 16 13:15:48.292: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f3e4551c5ee17b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:15:49.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6493" for this suite.
Feb 16 13:15:55.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:15:55.481: INFO: namespace sched-pred-6493 deletion completed in 6.121482897s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.304 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
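The FailedScheduling event above follows from the node-selector predicate: every key/value pair in the pod's nonempty nodeSelector must appear verbatim among a node's labels, and here neither node matches. A sketch of the rule, using labels taken from the `kubectl describe node iruya-node` output elsewhere in this run (the selector itself is hypothetical):

```python
# Labels from the describe-node output in this log.
node_labels = {
    "kubernetes.io/hostname": "iruya-node",
    "kubernetes.io/os": "linux",
    "kubernetes.io/arch": "amd64",
}
# A nonempty selector that no node satisfies (hypothetical value).
pod_node_selector = {"kubernetes.io/hostname": "no-such-node"}

def node_fits(selector, labels):
    # The scheduler predicate: all selector pairs must match node labels.
    return all(labels.get(k) == v for k, v in selector.items())

assert not node_fits(pod_node_selector, node_labels)
```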
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:15:55.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:15:55.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2987'
Feb 16 13:15:58.675: INFO: stderr: ""
Feb 16 13:15:58.675: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 16 13:15:58.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2987'
Feb 16 13:15:59.134: INFO: stderr: ""
Feb 16 13:15:59.135: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 16 13:16:00.146: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:00.146: INFO: Found 0 / 1
Feb 16 13:16:01.147: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:01.147: INFO: Found 0 / 1
Feb 16 13:16:02.179: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:02.179: INFO: Found 0 / 1
Feb 16 13:16:03.142: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:03.142: INFO: Found 0 / 1
Feb 16 13:16:04.143: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:04.143: INFO: Found 0 / 1
Feb 16 13:16:05.143: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:05.144: INFO: Found 0 / 1
Feb 16 13:16:06.148: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:06.148: INFO: Found 0 / 1
Feb 16 13:16:07.381: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:07.381: INFO: Found 0 / 1
Feb 16 13:16:08.150: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:08.150: INFO: Found 1 / 1
Feb 16 13:16:08.150: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 16 13:16:08.155: INFO: Selector matched 1 pod for map[app:redis]
Feb 16 13:16:08.155: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Feb 16 13:16:08.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rkmlc --namespace=kubectl-2987'
Feb 16 13:16:08.303: INFO: stderr: ""
Feb 16 13:16:08.303: INFO: stdout: "Name:           redis-master-rkmlc\nNamespace:      kubectl-2987\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 16 Feb 2020 13:15:58 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://cbfb28d4b784b102eebee3c03afdd88f6b81ee84aac5189aca00cf5f405502c9\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 16 Feb 2020 13:16:07 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5rvrf (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-5rvrf:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-5rvrf\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  10s   default-scheduler    Successfully assigned kubectl-2987/redis-master-rkmlc to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb 16 13:16:08.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2987'
Feb 16 13:16:08.421: INFO: stderr: ""
Feb 16 13:16:08.421: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2987\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-rkmlc\n"
Feb 16 13:16:08.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2987'
Feb 16 13:16:08.531: INFO: stderr: ""
Feb 16 13:16:08.531: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2987\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.204.216\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 16 13:16:08.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 16 13:16:08.639: INFO: stderr: ""
Feb 16 13:16:08.639: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 16 Feb 2020 13:15:26 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 16 Feb 2020 13:15:26 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 16 Feb 2020 13:15:26 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 16 Feb 2020 13:15:26 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         196d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         127d\n  kubectl-2987               redis-master-rkmlc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 16 13:16:08.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2987'
Feb 16 13:16:08.737: INFO: stderr: ""
Feb 16 13:16:08.737: INFO: stdout: "Name:         kubectl-2987\nLabels:       e2e-framework=kubectl\n              e2e-run=50cb5016-e60b-4227-8f03-b38718c02773\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:16:08.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2987" for this suite.
Feb 16 13:16:30.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:16:30.940: INFO: namespace kubectl-2987 deletion completed in 22.194121562s

• [SLOW TEST:35.458 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:16:30.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 16 13:16:41.119: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-eddba25a-67b4-4784-8c26-70bd8aa3f0a3,GenerateName:,Namespace:events-6485,SelfLink:/api/v1/namespaces/events-6485/pods/send-events-eddba25a-67b4-4784-8c26-70bd8aa3f0a3,UID:0bd960b2-6612-40ee-adda-7b87939ab6ef,ResourceVersion:24572801,Generation:0,CreationTimestamp:2020-02-16 13:16:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 64867352,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d2n4r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d2n4r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-d2n4r true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000985d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000985d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:16:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:16:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:16:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:16:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-16 13:16:31 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-16 13:16:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://8b303a0f7e2093866af92c48de5b8ed4cf99961c875439c19cfe43fcb59b589e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 16 13:16:43.125: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 16 13:16:45.141: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:16:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6485" for this suite.
Feb 16 13:17:29.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:17:29.425: INFO: namespace events-6485 deletion completed in 44.222771957s

• [SLOW TEST:58.485 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:17:29.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 16 13:17:29.508: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:17:43.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1553" for this suite.
Feb 16 13:17:49.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:17:49.655: INFO: namespace init-container-1553 deletion completed in 6.192638216s

• [SLOW TEST:20.229 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:17:49.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-11fa0f0a-6381-4cbb-8985-471531880546
STEP: Creating a pod to test consume secrets
Feb 16 13:17:49.818: INFO: Waiting up to 5m0s for pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4" in namespace "secrets-661" to be "success or failure"
Feb 16 13:17:49.834: INFO: Pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.073063ms
Feb 16 13:17:51.844: INFO: Pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026635647s
Feb 16 13:17:53.865: INFO: Pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047324354s
Feb 16 13:17:55.875: INFO: Pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057751843s
Feb 16 13:17:57.914: INFO: Pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096755341s
STEP: Saw pod success
Feb 16 13:17:57.914: INFO: Pod "pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4" satisfied condition "success or failure"
Feb 16 13:17:57.919: INFO: Trying to get logs from node iruya-node pod pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4 container secret-volume-test: 
STEP: delete the pod
Feb 16 13:17:58.065: INFO: Waiting for pod pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4 to disappear
Feb 16 13:17:58.079: INFO: Pod pod-secrets-41bd3c84-a06d-4fea-8c79-6bc683514ae4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:17:58.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-661" for this suite.
Feb 16 13:18:04.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:18:04.293: INFO: namespace secrets-661 deletion completed in 6.20785029s

• [SLOW TEST:14.637 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:18:04.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-kcd96 in namespace proxy-306
I0216 13:18:04.457225       8 runners.go:180] Created replication controller with name: proxy-service-kcd96, namespace: proxy-306, replica count: 1
I0216 13:18:05.508552       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:06.508943       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:07.509344       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:08.510937       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:09.511405       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:10.511904       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:11.512270       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 13:18:12.512682       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0216 13:18:13.513070       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0216 13:18:14.513518       8 runners.go:180] proxy-service-kcd96 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 16 13:18:14.525: INFO: setup took 10.174643362s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 16 13:18:14.563: INFO: (0) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 37.410512ms)
Feb 16 13:18:14.563: INFO: (0) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 37.394201ms)
Feb 16 13:18:14.563: INFO: (0) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 37.650038ms)
Feb 16 13:18:14.563: INFO: (0) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 37.516749ms)
Feb 16 13:18:14.563: INFO: (0) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 37.724165ms)
Feb 16 13:18:14.563: INFO: (0) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 43.91853ms)
Feb 16 13:18:14.570: INFO: (0) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 44.183289ms)
Feb 16 13:18:14.571: INFO: (0) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 44.806379ms)
Feb 16 13:18:14.576: INFO: (0) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 49.790223ms)
Feb 16 13:18:14.576: INFO: (0) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 49.934035ms)
Feb 16 13:18:14.584: INFO: (0) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 58.584954ms)
Feb 16 13:18:14.585: INFO: (0) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 58.66618ms)
Feb 16 13:18:14.585: INFO: (0) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: t... (200; 12.557114ms)
Feb 16 13:18:14.606: INFO: (1) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 13.044ms)
Feb 16 13:18:14.608: INFO: (1) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 14.942372ms)
Feb 16 13:18:14.608: INFO: (1) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 14.983052ms)
Feb 16 13:18:14.609: INFO: (1) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 16.125165ms)
Feb 16 13:18:14.609: INFO: (1) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 16.111675ms)
Feb 16 13:18:14.610: INFO: (1) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 16.823253ms)
Feb 16 13:18:14.610: INFO: (1) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 17.564269ms)
Feb 16 13:18:14.611: INFO: (1) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 18.040811ms)
Feb 16 13:18:14.612: INFO: (1) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testtest (200; 19.907756ms)
Feb 16 13:18:14.613: INFO: (1) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 20.085817ms)
Feb 16 13:18:14.627: INFO: (2) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 13.722174ms)
Feb 16 13:18:14.627: INFO: (2) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 13.598465ms)
Feb 16 13:18:14.629: INFO: (2) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 16.81345ms)
Feb 16 13:18:14.631: INFO: (2) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 17.36387ms)
Feb 16 13:18:14.631: INFO: (2) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 18.204597ms)
Feb 16 13:18:14.633: INFO: (2) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 20.006547ms)
Feb 16 13:18:14.633: INFO: (2) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 20.188319ms)
Feb 16 13:18:14.633: INFO: (2) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 20.200877ms)
Feb 16 13:18:14.634: INFO: (2) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 21.048115ms)
Feb 16 13:18:14.634: INFO: (2) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 21.023176ms)
Feb 16 13:18:14.634: INFO: (2) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 21.042196ms)
Feb 16 13:18:14.634: INFO: (2) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: test (200; 18.769205ms)
Feb 16 13:18:14.656: INFO: (3) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 20.518838ms)
Feb 16 13:18:14.656: INFO: (3) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:1080/proxy/: t... (200; 20.378961ms)
Feb 16 13:18:14.656: INFO: (3) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 20.287397ms)
Feb 16 13:18:14.656: INFO: (3) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 20.376159ms)
Feb 16 13:18:14.657: INFO: (3) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 21.566992ms)
Feb 16 13:18:14.657: INFO: (3) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testtestt... (200; 12.757407ms)
Feb 16 13:18:14.674: INFO: (4) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 13.778039ms)
Feb 16 13:18:14.674: INFO: (4) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 14.130847ms)
Feb 16 13:18:14.675: INFO: (4) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 14.338039ms)
Feb 16 13:18:14.675: INFO: (4) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 15.383273ms)
Feb 16 13:18:14.676: INFO: (4) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 15.437987ms)
Feb 16 13:18:14.676: INFO: (4) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 16.144873ms)
Feb 16 13:18:14.677: INFO: (4) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 16.630704ms)
Feb 16 13:18:14.677: INFO: (4) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 16.326703ms)
Feb 16 13:18:14.677: INFO: (4) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 16.768837ms)
Feb 16 13:18:14.677: INFO: (4) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 16.860372ms)
Feb 16 13:18:14.677: INFO: (4) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 16.961671ms)
Feb 16 13:18:14.688: INFO: (5) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 9.649481ms)
Feb 16 13:18:14.688: INFO: (5) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 9.72942ms)
Feb 16 13:18:14.688: INFO: (5) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 10.229219ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 35.297145ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 35.592313ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 35.787072ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 35.509655ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 35.428152ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:1080/proxy/: t... (200; 35.677364ms)
Feb 16 13:18:14.714: INFO: (5) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testtestt... (200; 27.637533ms)
Feb 16 13:18:14.743: INFO: (6) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 28.259177ms)
Feb 16 13:18:14.743: INFO: (6) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 28.293983ms)
Feb 16 13:18:14.743: INFO: (6) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 28.089952ms)
Feb 16 13:18:14.743: INFO: (6) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 28.182218ms)
Feb 16 13:18:14.743: INFO: (6) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 28.036953ms)
Feb 16 13:18:14.744: INFO: (6) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 29.444731ms)
Feb 16 13:18:14.744: INFO: (6) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 29.451723ms)
Feb 16 13:18:14.745: INFO: (6) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 30.049898ms)
Feb 16 13:18:14.745: INFO: (6) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 31.039818ms)
Feb 16 13:18:14.746: INFO: (6) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 31.624717ms)
Feb 16 13:18:14.747: INFO: (6) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testtest (200; 20.74584ms)
Feb 16 13:18:14.770: INFO: (7) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 21.57122ms)
Feb 16 13:18:14.770: INFO: (7) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 22.007555ms)
Feb 16 13:18:14.770: INFO: (7) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 22.345586ms)
Feb 16 13:18:14.770: INFO: (7) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:1080/proxy/: t... (200; 22.375932ms)
Feb 16 13:18:14.771: INFO: (7) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 22.896488ms)
Feb 16 13:18:14.775: INFO: (7) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 26.874059ms)
Feb 16 13:18:14.775: INFO: (7) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 27.172305ms)
Feb 16 13:18:14.788: INFO: (8) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 12.324107ms)
Feb 16 13:18:14.788: INFO: (8) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 12.879763ms)
Feb 16 13:18:14.789: INFO: (8) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 13.01557ms)
Feb 16 13:18:14.790: INFO: (8) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: test (200; 28.645381ms)
Feb 16 13:18:14.827: INFO: (9) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:1080/proxy/: t... (200; 29.99384ms)
Feb 16 13:18:14.827: INFO: (9) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 28.228091ms)
Feb 16 13:18:14.859: INFO: (10) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 29.420896ms)
Feb 16 13:18:14.860: INFO: (10) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testtest (200; 30.308212ms)
Feb 16 13:18:14.860: INFO: (10) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 30.168585ms)
Feb 16 13:18:14.861: INFO: (10) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 30.730765ms)
Feb 16 13:18:14.862: INFO: (10) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 31.977122ms)
Feb 16 13:18:14.886: INFO: (10) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 55.561316ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 34.058272ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 34.037845ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 34.68224ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 34.258069ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 34.569831ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 34.365512ms)
Feb 16 13:18:14.921: INFO: (11) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 34.431345ms)
Feb 16 13:18:14.923: INFO: (11) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 36.746376ms)
Feb 16 13:18:14.969: INFO: (11) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 83.034775ms)
Feb 16 13:18:14.970: INFO: (11) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 83.872531ms)
Feb 16 13:18:14.971: INFO: (11) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 84.454041ms)
Feb 16 13:18:14.971: INFO: (11) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testt... (200; 9.804289ms)
Feb 16 13:18:14.985: INFO: (12) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 10.48542ms)
Feb 16 13:18:14.990: INFO: (12) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 14.530162ms)
Feb 16 13:18:14.990: INFO: (12) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 14.754989ms)
Feb 16 13:18:14.990: INFO: (12) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 14.776088ms)
Feb 16 13:18:14.990: INFO: (12) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 15.069453ms)
Feb 16 13:18:14.990: INFO: (12) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 15.032886ms)
Feb 16 13:18:14.990: INFO: (12) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 15.055459ms)
Feb 16 13:18:14.992: INFO: (12) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 16.700544ms)
Feb 16 13:18:15.009: INFO: (13) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 17.489252ms)
Feb 16 13:18:15.010: INFO: (13) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 22.47468ms)
Feb 16 13:18:15.014: INFO: (13) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 22.783119ms)
Feb 16 13:18:15.015: INFO: (13) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 22.83124ms)
Feb 16 13:18:15.015: INFO: (13) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 23.000901ms)
Feb 16 13:18:15.015: INFO: (13) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 23.786768ms)
Feb 16 13:18:15.015: INFO: (13) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 23.763767ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 24.880224ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 25.298315ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 25.427643ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 25.483776ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 25.49527ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 25.520125ms)
Feb 16 13:18:15.017: INFO: (13) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: test (200; 6.361597ms)
Feb 16 13:18:15.025: INFO: (14) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 6.932979ms)
Feb 16 13:18:15.025: INFO: (14) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 7.146912ms)
Feb 16 13:18:15.025: INFO: (14) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 7.246379ms)
Feb 16 13:18:15.026: INFO: (14) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testt... (200; 9.24029ms)
Feb 16 13:18:15.029: INFO: (14) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 11.254811ms)
Feb 16 13:18:15.031: INFO: (14) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 12.833374ms)
Feb 16 13:18:15.031: INFO: (14) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 13.411236ms)
Feb 16 13:18:15.033: INFO: (14) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 14.764265ms)
Feb 16 13:18:15.034: INFO: (14) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 16.562346ms)
Feb 16 13:18:15.034: INFO: (14) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 16.621681ms)
Feb 16 13:18:15.047: INFO: (15) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 12.382902ms)
Feb 16 13:18:15.047: INFO: (15) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 12.542721ms)
Feb 16 13:18:15.048: INFO: (15) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 13.060847ms)
Feb 16 13:18:15.050: INFO: (15) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testt... (200; 15.728354ms)
Feb 16 13:18:15.051: INFO: (15) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 15.691766ms)
Feb 16 13:18:15.051: INFO: (15) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 15.965565ms)
Feb 16 13:18:15.051: INFO: (15) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 15.745116ms)
Feb 16 13:18:15.051: INFO: (15) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 15.794829ms)
Feb 16 13:18:15.052: INFO: (15) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 16.885623ms)
Feb 16 13:18:15.052: INFO: (15) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 17.280787ms)
Feb 16 13:18:15.052: INFO: (15) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 17.399631ms)
Feb 16 13:18:15.053: INFO: (15) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 17.918652ms)
Feb 16 13:18:15.053: INFO: (15) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 17.769158ms)
Feb 16 13:18:15.070: INFO: (16) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 16.796103ms)
Feb 16 13:18:15.070: INFO: (16) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 17.478305ms)
Feb 16 13:18:15.070: INFO: (16) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 17.518545ms)
Feb 16 13:18:15.071: INFO: (16) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 18.528397ms)
Feb 16 13:18:15.071: INFO: (16) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 18.437828ms)
Feb 16 13:18:15.071: INFO: (16) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 18.637617ms)
Feb 16 13:18:15.071: INFO: (16) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testt... (200; 19.217403ms)
Feb 16 13:18:15.076: INFO: (16) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 23.007344ms)
Feb 16 13:18:15.076: INFO: (16) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 23.395891ms)
Feb 16 13:18:15.083: INFO: (17) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 5.840404ms)
Feb 16 13:18:15.083: INFO: (17) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 5.943944ms)
Feb 16 13:18:15.083: INFO: (17) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2/proxy/: test (200; 6.019262ms)
Feb 16 13:18:15.085: INFO: (17) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 7.83034ms)
Feb 16 13:18:15.085: INFO: (17) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 8.200548ms)
Feb 16 13:18:15.085: INFO: (17) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 8.454239ms)
Feb 16 13:18:15.085: INFO: (17) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 8.023114ms)
Feb 16 13:18:15.087: INFO: (17) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 10.411515ms)
Feb 16 13:18:15.087: INFO: (17) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 10.466442ms)
Feb 16 13:18:15.087: INFO: (17) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 10.890496ms)
Feb 16 13:18:15.087: INFO: (17) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: testt... (200; 8.484963ms)
Feb 16 13:18:15.097: INFO: (18) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 8.577637ms)
Feb 16 13:18:15.097: INFO: (18) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 9.171334ms)
Feb 16 13:18:15.097: INFO: (18) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 9.372032ms)
Feb 16 13:18:15.097: INFO: (18) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 9.372475ms)
Feb 16 13:18:15.098: INFO: (18) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname1/proxy/: foo (200; 10.44078ms)
Feb 16 13:18:15.099: INFO: (18) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 10.605627ms)
Feb 16 13:18:15.099: INFO: (18) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: test (200; 11.763072ms)
Feb 16 13:18:15.100: INFO: (18) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 11.737214ms)
Feb 16 13:18:15.108: INFO: (19) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:460/proxy/: tls baz (200; 8.774203ms)
Feb 16 13:18:15.109: INFO: (19) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:1080/proxy/: testt... (200; 10.126234ms)
Feb 16 13:18:15.110: INFO: (19) /api/v1/namespaces/proxy-306/pods/http:proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 10.47305ms)
Feb 16 13:18:15.111: INFO: (19) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:162/proxy/: bar (200; 10.840383ms)
Feb 16 13:18:15.111: INFO: (19) /api/v1/namespaces/proxy-306/pods/proxy-service-kcd96-sk7v2:160/proxy/: foo (200; 11.020292ms)
Feb 16 13:18:15.111: INFO: (19) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:462/proxy/: tls qux (200; 11.16703ms)
Feb 16 13:18:15.111: INFO: (19) /api/v1/namespaces/proxy-306/pods/https:proxy-service-kcd96-sk7v2:443/proxy/: test (200; 11.378768ms)
Feb 16 13:18:15.112: INFO: (19) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname2/proxy/: bar (200; 12.238785ms)
Feb 16 13:18:15.113: INFO: (19) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname2/proxy/: tls qux (200; 13.039434ms)
Feb 16 13:18:15.113: INFO: (19) /api/v1/namespaces/proxy-306/services/https:proxy-service-kcd96:tlsportname1/proxy/: tls baz (200; 12.994966ms)
Feb 16 13:18:15.113: INFO: (19) /api/v1/namespaces/proxy-306/services/http:proxy-service-kcd96:portname1/proxy/: foo (200; 13.412519ms)
Feb 16 13:18:15.114: INFO: (19) /api/v1/namespaces/proxy-306/services/proxy-service-kcd96:portname2/proxy/: bar (200; 13.889144ms)
STEP: deleting ReplicationController proxy-service-kcd96 in namespace proxy-306, will wait for the garbage collector to delete the pods
Feb 16 13:18:15.176: INFO: Deleting ReplicationController proxy-service-kcd96 took: 8.805602ms
Feb 16 13:18:15.476: INFO: Terminating ReplicationController proxy-service-kcd96 pods took: 300.591275ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:18:26.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-306" for this suite.
Feb 16 13:18:32.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:18:32.743: INFO: namespace proxy-306 deletion completed in 6.146618428s

• [SLOW TEST:28.449 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
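Each proxy line above records the response body, HTTP status, and per-request elapsed time for a GET through the apiserver proxy path. The measurement shape can be sketched with stdlib Go; `newStubServer` is a hypothetical stand-in for the `/api/v1/namespaces/.../proxy/` endpoint, which this sketch cannot reach outside a cluster:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// newStubServer stands in for an apiserver proxy endpoint (an
// assumption for illustration; the real test hits /api/v1/.../proxy/).
func newStubServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "foo")
	}))
}

// timedGet performs one GET and returns body, status code, and elapsed
// time, mirroring the "<body> (<status>; <elapsed>)" log lines above.
func timedGet(url string) (string, int, time.Duration) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body), resp.StatusCode, time.Since(start)
}

func main() {
	srv := newStubServer()
	defer srv.Close()
	body, code, elapsed := timedGet(srv.URL)
	fmt.Printf("%s (%d; %s)\n", body, code, elapsed)
}
```

The real test repeats this for every pod/service port combination across 20 rounds, which is why the same endpoints recur with different latencies.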
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:18:32.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 16 13:18:55.252: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 13:18:55.256: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 13:18:57.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 13:18:57.301: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 13:18:59.257: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 13:18:59.265: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 13:19:01.257: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 13:19:01.272: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:19:01.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9823" for this suite.
Feb 16 13:19:23.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:19:23.435: INFO: namespace container-lifecycle-hook-9823 deletion completed in 22.124898631s

• [SLOW TEST:50.692 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:19:23.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 16 13:19:23.547: INFO: Waiting up to 5m0s for pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2" in namespace "emptydir-7766" to be "success or failure"
Feb 16 13:19:23.566: INFO: Pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.153259ms
Feb 16 13:19:25.575: INFO: Pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028241941s
Feb 16 13:19:28.502: INFO: Pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.95442668s
Feb 16 13:19:30.517: INFO: Pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.969567839s
Feb 16 13:19:32.534: INFO: Pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.986697292s
STEP: Saw pod success
Feb 16 13:19:32.534: INFO: Pod "pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2" satisfied condition "success or failure"
Feb 16 13:19:32.547: INFO: Trying to get logs from node iruya-node pod pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2 container test-container: 
STEP: delete the pod
Feb 16 13:19:32.734: INFO: Waiting for pod pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2 to disappear
Feb 16 13:19:32.752: INFO: Pod pod-8328e3ad-c44f-4c1b-b393-fc5bb6ea0aa2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:19:32.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7766" for this suite.
Feb 16 13:19:38.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:19:38.999: INFO: namespace emptydir-7766 deletion completed in 6.239826183s

• [SLOW TEST:15.563 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:19:39.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6f048ef3-d748-4eaf-8c47-104084b8eca1
STEP: Creating a pod to test consume configMaps
Feb 16 13:19:39.169: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5" in namespace "projected-1578" to be "success or failure"
Feb 16 13:19:39.194: INFO: Pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.594291ms
Feb 16 13:19:41.199: INFO: Pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030408387s
Feb 16 13:19:43.497: INFO: Pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328171525s
Feb 16 13:19:45.508: INFO: Pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5": Phase="Running", Reason="", readiness=true. Elapsed: 6.338537618s
Feb 16 13:19:47.522: INFO: Pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.352813452s
STEP: Saw pod success
Feb 16 13:19:47.522: INFO: Pod "pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5" satisfied condition "success or failure"
Feb 16 13:19:47.529: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 13:19:47.663: INFO: Waiting for pod pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5 to disappear
Feb 16 13:19:47.677: INFO: Pod pod-projected-configmaps-64646e26-1c5a-4b29-a2cf-7ce8bd0562a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:19:47.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1578" for this suite.
Feb 16 13:19:54.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:19:54.561: INFO: namespace projected-1578 deletion completed in 6.870110219s

• [SLOW TEST:15.561 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:19:54.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:19:54.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1" in namespace "projected-2148" to be "success or failure"
Feb 16 13:19:54.724: INFO: Pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.508291ms
Feb 16 13:19:56.750: INFO: Pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03988908s
Feb 16 13:19:58.757: INFO: Pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047208867s
Feb 16 13:20:00.768: INFO: Pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058603644s
Feb 16 13:20:02.802: INFO: Pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091921757s
STEP: Saw pod success
Feb 16 13:20:02.802: INFO: Pod "downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1" satisfied condition "success or failure"
Feb 16 13:20:02.805: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1 container client-container: 
STEP: delete the pod
Feb 16 13:20:02.958: INFO: Waiting for pod downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1 to disappear
Feb 16 13:20:02.979: INFO: Pod downwardapi-volume-7406b152-e6fb-48aa-ac73-92f581eba8b1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:20:02.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2148" for this suite.
Feb 16 13:20:09.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:20:09.210: INFO: namespace projected-2148 deletion completed in 6.166695889s

• [SLOW TEST:14.649 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:20:09.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-438013df-3be8-44f2-9399-dfea6d45ddff
STEP: Creating a pod to test consume secrets
Feb 16 13:20:09.316: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1" in namespace "projected-1669" to be "success or failure"
Feb 16 13:20:09.359: INFO: Pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 42.624697ms
Feb 16 13:20:11.375: INFO: Pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058947347s
Feb 16 13:20:13.389: INFO: Pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072426837s
Feb 16 13:20:15.822: INFO: Pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.505111612s
Feb 16 13:20:17.831: INFO: Pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.514648034s
STEP: Saw pod success
Feb 16 13:20:17.831: INFO: Pod "pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1" satisfied condition "success or failure"
Feb 16 13:20:17.834: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 13:20:17.989: INFO: Waiting for pod pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1 to disappear
Feb 16 13:20:18.003: INFO: Pod pod-projected-secrets-7342e9fd-1767-4309-acd7-8ac84efedaf1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:20:18.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1669" for this suite.
Feb 16 13:20:24.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:20:24.128: INFO: namespace projected-1669 deletion completed in 6.118392367s

• [SLOW TEST:14.918 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:20:24.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2841
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 13:20:24.205: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 13:21:02.397: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2841 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:21:02.397: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:21:02.463242       8 log.go:172] (0xc0013304d0) (0xc00138b720) Create stream
I0216 13:21:02.463289       8 log.go:172] (0xc0013304d0) (0xc00138b720) Stream added, broadcasting: 1
I0216 13:21:02.478321       8 log.go:172] (0xc0013304d0) Reply frame received for 1
I0216 13:21:02.478416       8 log.go:172] (0xc0013304d0) (0xc001bc8d20) Create stream
I0216 13:21:02.478423       8 log.go:172] (0xc0013304d0) (0xc001bc8d20) Stream added, broadcasting: 3
I0216 13:21:02.480735       8 log.go:172] (0xc0013304d0) Reply frame received for 3
I0216 13:21:02.480757       8 log.go:172] (0xc0013304d0) (0xc000c21680) Create stream
I0216 13:21:02.480769       8 log.go:172] (0xc0013304d0) (0xc000c21680) Stream added, broadcasting: 5
I0216 13:21:02.482114       8 log.go:172] (0xc0013304d0) Reply frame received for 5
I0216 13:21:03.691298       8 log.go:172] (0xc0013304d0) Data frame received for 3
I0216 13:21:03.691427       8 log.go:172] (0xc001bc8d20) (3) Data frame handling
I0216 13:21:03.691502       8 log.go:172] (0xc001bc8d20) (3) Data frame sent
I0216 13:21:03.882944       8 log.go:172] (0xc0013304d0) Data frame received for 1
I0216 13:21:03.883146       8 log.go:172] (0xc0013304d0) (0xc000c21680) Stream removed, broadcasting: 5
I0216 13:21:03.883220       8 log.go:172] (0xc00138b720) (1) Data frame handling
I0216 13:21:03.883296       8 log.go:172] (0xc00138b720) (1) Data frame sent
I0216 13:21:03.883588       8 log.go:172] (0xc0013304d0) (0xc001bc8d20) Stream removed, broadcasting: 3
I0216 13:21:03.883619       8 log.go:172] (0xc0013304d0) (0xc00138b720) Stream removed, broadcasting: 1
I0216 13:21:03.883636       8 log.go:172] (0xc0013304d0) Go away received
I0216 13:21:03.884391       8 log.go:172] (0xc0013304d0) (0xc00138b720) Stream removed, broadcasting: 1
I0216 13:21:03.884413       8 log.go:172] (0xc0013304d0) (0xc001bc8d20) Stream removed, broadcasting: 3
I0216 13:21:03.884613       8 log.go:172] (0xc0013304d0) (0xc000c21680) Stream removed, broadcasting: 5
Feb 16 13:21:03.884: INFO: Found all expected endpoints: [netserver-0]
Feb 16 13:21:03.896: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2841 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:21:03.896: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:21:04.002285       8 log.go:172] (0xc001a7cdc0) (0xc000c21c20) Create stream
I0216 13:21:04.002401       8 log.go:172] (0xc001a7cdc0) (0xc000c21c20) Stream added, broadcasting: 1
I0216 13:21:04.013379       8 log.go:172] (0xc001a7cdc0) Reply frame received for 1
I0216 13:21:04.013411       8 log.go:172] (0xc001a7cdc0) (0xc001bc8e60) Create stream
I0216 13:21:04.013417       8 log.go:172] (0xc001a7cdc0) (0xc001bc8e60) Stream added, broadcasting: 3
I0216 13:21:04.015300       8 log.go:172] (0xc001a7cdc0) Reply frame received for 3
I0216 13:21:04.015314       8 log.go:172] (0xc001a7cdc0) (0xc00138b900) Create stream
I0216 13:21:04.015320       8 log.go:172] (0xc001a7cdc0) (0xc00138b900) Stream added, broadcasting: 5
I0216 13:21:04.023367       8 log.go:172] (0xc001a7cdc0) Reply frame received for 5
I0216 13:21:05.243338       8 log.go:172] (0xc001a7cdc0) Data frame received for 3
I0216 13:21:05.243377       8 log.go:172] (0xc001bc8e60) (3) Data frame handling
I0216 13:21:05.243395       8 log.go:172] (0xc001bc8e60) (3) Data frame sent
I0216 13:21:05.485493       8 log.go:172] (0xc001a7cdc0) Data frame received for 1
I0216 13:21:05.485611       8 log.go:172] (0xc001a7cdc0) (0xc001bc8e60) Stream removed, broadcasting: 3
I0216 13:21:05.485689       8 log.go:172] (0xc000c21c20) (1) Data frame handling
I0216 13:21:05.485730       8 log.go:172] (0xc000c21c20) (1) Data frame sent
I0216 13:21:05.485791       8 log.go:172] (0xc001a7cdc0) (0xc00138b900) Stream removed, broadcasting: 5
I0216 13:21:05.485884       8 log.go:172] (0xc001a7cdc0) (0xc000c21c20) Stream removed, broadcasting: 1
I0216 13:21:05.485936       8 log.go:172] (0xc001a7cdc0) Go away received
I0216 13:21:05.486323       8 log.go:172] (0xc001a7cdc0) (0xc000c21c20) Stream removed, broadcasting: 1
I0216 13:21:05.486384       8 log.go:172] (0xc001a7cdc0) (0xc001bc8e60) Stream removed, broadcasting: 3
I0216 13:21:05.486430       8 log.go:172] (0xc001a7cdc0) (0xc00138b900) Stream removed, broadcasting: 5
Feb 16 13:21:05.486: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:21:05.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2841" for this suite.
Feb 16 13:21:29.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:21:29.711: INFO: namespace pod-network-test-2841 deletion completed in 24.208986479s

• [SLOW TEST:65.582 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:21:29.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 16 13:21:29.875: INFO: Waiting up to 5m0s for pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef" in namespace "emptydir-7967" to be "success or failure"
Feb 16 13:21:29.898: INFO: Pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 22.91064ms
Feb 16 13:21:31.922: INFO: Pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046232732s
Feb 16 13:21:33.940: INFO: Pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06463821s
Feb 16 13:21:35.953: INFO: Pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077327902s
Feb 16 13:21:38.002: INFO: Pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126326348s
STEP: Saw pod success
Feb 16 13:21:38.002: INFO: Pod "pod-36c227ba-9c60-40f7-9f66-837a7943f9ef" satisfied condition "success or failure"
Feb 16 13:21:38.009: INFO: Trying to get logs from node iruya-node pod pod-36c227ba-9c60-40f7-9f66-837a7943f9ef container test-container: 
STEP: delete the pod
Feb 16 13:21:38.076: INFO: Waiting for pod pod-36c227ba-9c60-40f7-9f66-837a7943f9ef to disappear
Feb 16 13:21:38.084: INFO: Pod pod-36c227ba-9c60-40f7-9f66-837a7943f9ef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:21:38.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7967" for this suite.
Feb 16 13:21:44.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:21:44.262: INFO: namespace emptydir-7967 deletion completed in 6.171698986s

• [SLOW TEST:14.551 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
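For context, the (root,0666,default) case above boils down to a pod that mounts an emptyDir volume on the default medium, creates a file with mode 0666, and exits; the suite then waits for phase Succeeded (the "success or failure" condition in the log) and reads the container logs. A minimal hand-written equivalent might look like the following; the pod name, image, and command are illustrative, not what the e2e framework actually generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo        # hypothetical name
spec:
  restartPolicy: Never            # run once to "success or failure"
  containers:
  - name: test-container
    image: busybox                # the suite uses its own mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: backed by node storage
```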
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:21:44.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2f1ba5a8-e88d-46a2-b1ad-865d8d10ffc7
STEP: Creating a pod to test consume configMaps
Feb 16 13:21:44.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a" in namespace "configmap-4295" to be "success or failure"
Feb 16 13:21:44.410: INFO: Pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.565633ms
Feb 16 13:21:46.418: INFO: Pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013735803s
Feb 16 13:21:48.425: INFO: Pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020148658s
Feb 16 13:21:50.440: INFO: Pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035344588s
Feb 16 13:21:52.452: INFO: Pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047253236s
STEP: Saw pod success
Feb 16 13:21:52.452: INFO: Pod "pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a" satisfied condition "success or failure"
Feb 16 13:21:52.459: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a container configmap-volume-test: 
STEP: delete the pod
Feb 16 13:21:52.599: INFO: Waiting for pod pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a to disappear
Feb 16 13:21:52.613: INFO: Pod pod-configmaps-b2263bde-981e-4a96-8f55-00a06e48ba8a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:21:52.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4295" for this suite.
Feb 16 13:21:58.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:21:58.738: INFO: namespace configmap-4295 deletion completed in 6.119334838s

• [SLOW TEST:14.475 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
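The "consumable in multiple volumes in the same pod" check above mounts a single ConfigMap at two different paths and reads both copies. A hedged sketch of such a pod (ConfigMap name, keys, and paths are made up for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config               # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:                        # the same ConfigMap, mounted twice
  - name: cm-one
    configMap:
      name: demo-config
  - name: cm-two
    configMap:
      name: demo-config
```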
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:21:58.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 16 13:22:08.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-3426f2c5-adbb-40d5-bcec-20189eb22a1b -c busybox-main-container --namespace=emptydir-5882 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 16 13:22:09.448: INFO: stderr: "I0216 13:22:09.209010     219 log.go:172] (0xc00086a4d0) (0xc0006a4d20) Create stream\nI0216 13:22:09.209093     219 log.go:172] (0xc00086a4d0) (0xc0006a4d20) Stream added, broadcasting: 1\nI0216 13:22:09.217191     219 log.go:172] (0xc00086a4d0) Reply frame received for 1\nI0216 13:22:09.217224     219 log.go:172] (0xc00086a4d0) (0xc0007920a0) Create stream\nI0216 13:22:09.217238     219 log.go:172] (0xc00086a4d0) (0xc0007920a0) Stream added, broadcasting: 3\nI0216 13:22:09.220510     219 log.go:172] (0xc00086a4d0) Reply frame received for 3\nI0216 13:22:09.220556     219 log.go:172] (0xc00086a4d0) (0xc000828000) Create stream\nI0216 13:22:09.220575     219 log.go:172] (0xc00086a4d0) (0xc000828000) Stream added, broadcasting: 5\nI0216 13:22:09.221804     219 log.go:172] (0xc00086a4d0) Reply frame received for 5\nI0216 13:22:09.311709     219 log.go:172] (0xc00086a4d0) Data frame received for 3\nI0216 13:22:09.311753     219 log.go:172] (0xc0007920a0) (3) Data frame handling\nI0216 13:22:09.311768     219 log.go:172] (0xc0007920a0) (3) Data frame sent\nI0216 13:22:09.441080     219 log.go:172] (0xc00086a4d0) Data frame received for 1\nI0216 13:22:09.441148     219 log.go:172] (0xc00086a4d0) (0xc0007920a0) Stream removed, broadcasting: 3\nI0216 13:22:09.441174     219 log.go:172] (0xc0006a4d20) (1) Data frame handling\nI0216 13:22:09.441191     219 log.go:172] (0xc0006a4d20) (1) Data frame sent\nI0216 13:22:09.441200     219 log.go:172] (0xc00086a4d0) (0xc0006a4d20) Stream removed, broadcasting: 1\nI0216 13:22:09.441212     219 log.go:172] (0xc00086a4d0) (0xc000828000) Stream removed, broadcasting: 5\nI0216 13:22:09.441230     219 log.go:172] (0xc00086a4d0) Go away received\nI0216 13:22:09.441641     219 log.go:172] (0xc00086a4d0) (0xc0006a4d20) Stream removed, broadcasting: 1\nI0216 13:22:09.441662     219 log.go:172] (0xc00086a4d0) (0xc0007920a0) Stream removed, broadcasting: 3\nI0216 13:22:09.441673     219 log.go:172] (0xc00086a4d0) (0xc000828000) Stream removed, broadcasting: 5\n"
Feb 16 13:22:09.448: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:22:09.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5882" for this suite.
Feb 16 13:22:15.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:22:15.610: INFO: namespace emptydir-5882 deletion completed in 6.153161807s

• [SLOW TEST:16.872 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
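The shared-volume test above is the classic sidecar pattern: both containers mount the same emptyDir, one writes `shareddata.txt` and the other can read it, which is why the `kubectl exec` against `busybox-main-container` returns the sub-container's message. A rough equivalent, with the mount path and container names taken from the log but the exact commands an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo
spec:
  containers:
  - name: busybox-main-container  # the container the log execs into
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  volumes:
  - name: volumeshare
    emptyDir: {}                  # one backing directory visible to both containers
```

Once both containers are running, `kubectl exec <pod> -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt` returns the message written by the other container, as in the log's stdout.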
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:22:15.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 16 13:22:15.718: INFO: Waiting up to 5m0s for pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a" in namespace "downward-api-6746" to be "success or failure"
Feb 16 13:22:15.729: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.256707ms
Feb 16 13:22:17.740: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021616996s
Feb 16 13:22:19.756: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037593899s
Feb 16 13:22:21.764: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04578346s
Feb 16 13:22:23.777: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05918062s
Feb 16 13:22:25.789: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071226446s
STEP: Saw pod success
Feb 16 13:22:25.789: INFO: Pod "downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a" satisfied condition "success or failure"
Feb 16 13:22:25.801: INFO: Trying to get logs from node iruya-node pod downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a container dapi-container: 
STEP: delete the pod
Feb 16 13:22:25.900: INFO: Waiting for pod downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a to disappear
Feb 16 13:22:25.914: INFO: Pod downward-api-b3558ab3-f0ed-43c8-aa4a-44385612f14a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:22:25.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6746" for this suite.
Feb 16 13:22:31.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:22:32.122: INFO: namespace downward-api-6746 deletion completed in 6.197434681s

• [SLOW TEST:16.512 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
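The Downward API case above injects the node's IP into the container environment via a `fieldRef` on `status.hostIP`, then asserts on the value printed by the container. The key stanza, in a hedged stand-alone pod (name and image illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the node IP the test asserts on
```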
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:22:32.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:22:32.266: INFO: Creating ReplicaSet my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d
Feb 16 13:22:32.347: INFO: Pod name my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d: Found 0 pods out of 1
Feb 16 13:22:37.364: INFO: Pod name my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d: Found 1 pods out of 1
Feb 16 13:22:37.364: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d" is running
Feb 16 13:22:39.413: INFO: Pod "my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d-lwqnf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:22:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:22:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:22:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:22:32 +0000 UTC Reason: Message:}])
Feb 16 13:22:39.413: INFO: Trying to dial the pod
Feb 16 13:22:44.465: INFO: Controller my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d: Got expected result from replica 1 [my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d-lwqnf]: "my-hostname-basic-df8cdb8b-9dfe-4784-8692-643a9059005d-lwqnf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:22:44.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2350" for this suite.
Feb 16 13:22:50.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:22:50.622: INFO: namespace replicaset-2350 deletion completed in 6.132021945s

• [SLOW TEST:18.500 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:22:50.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4927d46e-98c7-4540-8189-b27b24a9c72e
STEP: Creating a pod to test consume configMaps
Feb 16 13:22:50.803: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c" in namespace "projected-2715" to be "success or failure"
Feb 16 13:22:50.867: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 63.411424ms
Feb 16 13:22:52.926: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122395064s
Feb 16 13:22:54.940: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137075026s
Feb 16 13:22:56.984: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180547764s
Feb 16 13:22:58.993: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189698771s
Feb 16 13:23:01.013: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.21014145s
STEP: Saw pod success
Feb 16 13:23:01.013: INFO: Pod "pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c" satisfied condition "success or failure"
Feb 16 13:23:01.018: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 13:23:01.066: INFO: Waiting for pod pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c to disappear
Feb 16 13:23:01.070: INFO: Pod pod-projected-configmaps-fbafddd9-b1e5-43e2-9c33-05bbbb42b28c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:23:01.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2715" for this suite.
Feb 16 13:23:07.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:23:07.191: INFO: namespace projected-2715 deletion completed in 6.115796028s

• [SLOW TEST:16.569 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:23:07.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-4327014c-3d2b-4134-84f7-9ca31c50dfa2
STEP: Creating a pod to test consume configMaps
Feb 16 13:23:07.250: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d" in namespace "projected-7919" to be "success or failure"
Feb 16 13:23:07.254: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.230738ms
Feb 16 13:23:09.263: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013071883s
Feb 16 13:23:11.270: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019435131s
Feb 16 13:23:13.280: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029471032s
Feb 16 13:23:15.285: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034955853s
Feb 16 13:23:17.296: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045375062s
STEP: Saw pod success
Feb 16 13:23:17.296: INFO: Pod "pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d" satisfied condition "success or failure"
Feb 16 13:23:17.301: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 13:23:17.461: INFO: Waiting for pod pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d to disappear
Feb 16 13:23:17.473: INFO: Pod pod-projected-configmaps-b406c924-9551-42f4-a5e3-1970e784c07d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:23:17.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7919" for this suite.
Feb 16 13:23:23.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:23:23.649: INFO: namespace projected-7919 deletion completed in 6.168925458s

• [SLOW TEST:16.458 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:23:23.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 16 13:23:24.017: INFO: Waiting up to 5m0s for pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca" in namespace "emptydir-9823" to be "success or failure"
Feb 16 13:23:24.029: INFO: Pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca": Phase="Pending", Reason="", readiness=false. Elapsed: 11.679511ms
Feb 16 13:23:26.037: INFO: Pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020225921s
Feb 16 13:23:28.363: INFO: Pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345626798s
Feb 16 13:23:30.378: INFO: Pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361390991s
Feb 16 13:23:32.388: INFO: Pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.371230179s
STEP: Saw pod success
Feb 16 13:23:32.388: INFO: Pod "pod-c779672a-b0e1-4967-987d-7ea6a42b6fca" satisfied condition "success or failure"
Feb 16 13:23:32.392: INFO: Trying to get logs from node iruya-node pod pod-c779672a-b0e1-4967-987d-7ea6a42b6fca container test-container: 
STEP: delete the pod
Feb 16 13:23:32.602: INFO: Waiting for pod pod-c779672a-b0e1-4967-987d-7ea6a42b6fca to disappear
Feb 16 13:23:32.617: INFO: Pod pod-c779672a-b0e1-4967-987d-7ea6a42b6fca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:23:32.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9823" for this suite.
Feb 16 13:23:38.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:23:38.789: INFO: namespace emptydir-9823 deletion completed in 6.162770143s

• [SLOW TEST:15.139 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
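The (non-root,0777,default) variant exercises the same emptyDir machinery, but with the container running as a non-root UID via the pod security context and mode 0777 on the file. A hedged stand-alone sketch (UID, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0777-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root: the variable under test here
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium
```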
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:23:38.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-253
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-253
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-253
Feb 16 13:23:38.992: INFO: Found 0 stateful pods, waiting for 1
Feb 16 13:23:49.009: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 16 13:23:49.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 13:23:49.759: INFO: stderr: "I0216 13:23:49.274521     238 log.go:172] (0xc0004f0420) (0xc0009e03c0) Create stream\nI0216 13:23:49.274645     238 log.go:172] (0xc0004f0420) (0xc0009e03c0) Stream added, broadcasting: 1\nI0216 13:23:49.290486     238 log.go:172] (0xc0004f0420) Reply frame received for 1\nI0216 13:23:49.290570     238 log.go:172] (0xc0004f0420) (0xc00003a0a0) Create stream\nI0216 13:23:49.290588     238 log.go:172] (0xc0004f0420) (0xc00003a0a0) Stream added, broadcasting: 3\nI0216 13:23:49.292205     238 log.go:172] (0xc0004f0420) Reply frame received for 3\nI0216 13:23:49.292223     238 log.go:172] (0xc0004f0420) (0xc0005665a0) Create stream\nI0216 13:23:49.292231     238 log.go:172] (0xc0004f0420) (0xc0005665a0) Stream added, broadcasting: 5\nI0216 13:23:49.293732     238 log.go:172] (0xc0004f0420) Reply frame received for 5\nI0216 13:23:49.546607     238 log.go:172] (0xc0004f0420) Data frame received for 5\nI0216 13:23:49.546672     238 log.go:172] (0xc0005665a0) (5) Data frame handling\nI0216 13:23:49.546700     238 log.go:172] (0xc0005665a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 13:23:49.630642     238 log.go:172] (0xc0004f0420) Data frame received for 3\nI0216 13:23:49.631211     238 log.go:172] (0xc00003a0a0) (3) Data frame handling\nI0216 13:23:49.631353     238 log.go:172] (0xc00003a0a0) (3) Data frame sent\nI0216 13:23:49.743060     238 log.go:172] (0xc0004f0420) (0xc00003a0a0) Stream removed, broadcasting: 3\nI0216 13:23:49.743320     238 log.go:172] (0xc0004f0420) Data frame received for 1\nI0216 13:23:49.743456     238 log.go:172] (0xc0004f0420) (0xc0005665a0) Stream removed, broadcasting: 5\nI0216 13:23:49.743536     238 log.go:172] (0xc0009e03c0) (1) Data frame handling\nI0216 13:23:49.743576     238 log.go:172] (0xc0009e03c0) (1) Data frame sent\nI0216 13:23:49.743611     238 log.go:172] (0xc0004f0420) (0xc0009e03c0) Stream removed, broadcasting: 1\nI0216 13:23:49.743652     238 log.go:172] (0xc0004f0420) Go away received\nI0216 13:23:49.745081     238 log.go:172] (0xc0004f0420) (0xc0009e03c0) Stream removed, broadcasting: 1\nI0216 13:23:49.745112     238 log.go:172] (0xc0004f0420) (0xc00003a0a0) Stream removed, broadcasting: 3\nI0216 13:23:49.745130     238 log.go:172] (0xc0004f0420) (0xc0005665a0) Stream removed, broadcasting: 5\n"
Feb 16 13:23:49.759: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 13:23:49.759: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

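Editor's note: the exec logged just above is how the suite deliberately breaks the nginx readiness probe — moving index.html out of the web root makes the probe's HTTP GET fail, so ss-0 drops to Ready=false a few lines later. A minimal sketch of the equivalent manual invocation (namespace and pod name taken from the log; here we only assemble and print the command rather than run it against a cluster):

```shell
# Sketch only: assumes a live cluster would exist for the printed command.
# NS and POD come straight from the log above; "|| true" keeps the step
# from failing when the file has already been moved.
NS=statefulset-253
POD=ss-0
CMD="mv -v /usr/share/nginx/html/index.html /tmp/ || true"
echo kubectl --namespace="$NS" exec "$POD" -- /bin/sh -x -c "$CMD"
```

The `-x` flag is why the `+ mv -v ...` trace lines show up interleaved in the captured stderr stream.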
Feb 16 13:23:49.769: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 16 13:23:59.783: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 13:23:59.783: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 13:23:59.833: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 16 13:23:59.833: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:23:59.833: INFO: ss-1              Pending         []
Feb 16 13:23:59.833: INFO: 
Feb 16 13:23:59.833: INFO: StatefulSet ss has not reached scale 3, at 2
Feb 16 13:24:01.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.971844738s
Feb 16 13:24:03.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.423200556s
Feb 16 13:24:04.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.773777283s
Feb 16 13:24:05.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.69955554s
Feb 16 13:24:06.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.692492437s
Feb 16 13:24:07.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.393319414s
Feb 16 13:24:08.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.237658263s
Feb 16 13:24:09.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 93.62661ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-253
Feb 16 13:24:10.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:24:11.274: INFO: stderr: "I0216 13:24:10.940504     261 log.go:172] (0xc000142790) (0xc0007425a0) Create stream\nI0216 13:24:10.940643     261 log.go:172] (0xc000142790) (0xc0007425a0) Stream added, broadcasting: 1\nI0216 13:24:10.948807     261 log.go:172] (0xc000142790) Reply frame received for 1\nI0216 13:24:10.948846     261 log.go:172] (0xc000142790) (0xc00065e3c0) Create stream\nI0216 13:24:10.948857     261 log.go:172] (0xc000142790) (0xc00065e3c0) Stream added, broadcasting: 3\nI0216 13:24:10.950219     261 log.go:172] (0xc000142790) Reply frame received for 3\nI0216 13:24:10.950257     261 log.go:172] (0xc000142790) (0xc00065e460) Create stream\nI0216 13:24:10.950270     261 log.go:172] (0xc000142790) (0xc00065e460) Stream added, broadcasting: 5\nI0216 13:24:10.951380     261 log.go:172] (0xc000142790) Reply frame received for 5\nI0216 13:24:11.069141     261 log.go:172] (0xc000142790) Data frame received for 5\nI0216 13:24:11.069202     261 log.go:172] (0xc00065e460) (5) Data frame handling\nI0216 13:24:11.069237     261 log.go:172] (0xc00065e460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 13:24:11.069559     261 log.go:172] (0xc000142790) Data frame received for 3\nI0216 13:24:11.069569     261 log.go:172] (0xc00065e3c0) (3) Data frame handling\nI0216 13:24:11.069579     261 log.go:172] (0xc00065e3c0) (3) Data frame sent\nI0216 13:24:11.266655     261 log.go:172] (0xc000142790) (0xc00065e3c0) Stream removed, broadcasting: 3\nI0216 13:24:11.266925     261 log.go:172] (0xc000142790) Data frame received for 1\nI0216 13:24:11.267073     261 log.go:172] (0xc000142790) (0xc00065e460) Stream removed, broadcasting: 5\nI0216 13:24:11.267169     261 log.go:172] (0xc0007425a0) (1) Data frame handling\nI0216 13:24:11.267206     261 log.go:172] (0xc0007425a0) (1) Data frame sent\nI0216 13:24:11.267227     261 log.go:172] (0xc000142790) (0xc0007425a0) Stream removed, broadcasting: 1\nI0216 13:24:11.267246     261 log.go:172] 
(0xc000142790) Go away received\nI0216 13:24:11.267991     261 log.go:172] (0xc000142790) (0xc0007425a0) Stream removed, broadcasting: 1\nI0216 13:24:11.268017     261 log.go:172] (0xc000142790) (0xc00065e3c0) Stream removed, broadcasting: 3\nI0216 13:24:11.268027     261 log.go:172] (0xc000142790) (0xc00065e460) Stream removed, broadcasting: 5\n"
Feb 16 13:24:11.274: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 13:24:11.274: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 13:24:11.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:24:11.682: INFO: stderr: "I0216 13:24:11.454614     278 log.go:172] (0xc0008fe2c0) (0xc00084a6e0) Create stream\nI0216 13:24:11.454758     278 log.go:172] (0xc0008fe2c0) (0xc00084a6e0) Stream added, broadcasting: 1\nI0216 13:24:11.458369     278 log.go:172] (0xc0008fe2c0) Reply frame received for 1\nI0216 13:24:11.458400     278 log.go:172] (0xc0008fe2c0) (0xc000540140) Create stream\nI0216 13:24:11.458407     278 log.go:172] (0xc0008fe2c0) (0xc000540140) Stream added, broadcasting: 3\nI0216 13:24:11.459539     278 log.go:172] (0xc0008fe2c0) Reply frame received for 3\nI0216 13:24:11.459566     278 log.go:172] (0xc0008fe2c0) (0xc00084a780) Create stream\nI0216 13:24:11.459575     278 log.go:172] (0xc0008fe2c0) (0xc00084a780) Stream added, broadcasting: 5\nI0216 13:24:11.460648     278 log.go:172] (0xc0008fe2c0) Reply frame received for 5\nI0216 13:24:11.589441     278 log.go:172] (0xc0008fe2c0) Data frame received for 5\nI0216 13:24:11.589473     278 log.go:172] (0xc00084a780) (5) Data frame handling\nI0216 13:24:11.589490     278 log.go:172] (0xc00084a780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 13:24:11.594381     278 log.go:172] (0xc0008fe2c0) Data frame received for 3\nI0216 13:24:11.594408     278 log.go:172] (0xc000540140) (3) Data frame handling\nI0216 13:24:11.594434     278 log.go:172] (0xc000540140) (3) Data frame sent\nI0216 13:24:11.594618     278 log.go:172] (0xc0008fe2c0) Data frame received for 5\nI0216 13:24:11.594630     278 log.go:172] (0xc00084a780) (5) Data frame handling\nI0216 13:24:11.594641     278 log.go:172] (0xc00084a780) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0216 13:24:11.672272     278 log.go:172] (0xc0008fe2c0) Data frame received for 1\nI0216 13:24:11.672482     278 log.go:172] (0xc00084a6e0) (1) Data frame handling\nI0216 13:24:11.672538     278 log.go:172] (0xc00084a6e0) (1) Data frame sent\nI0216 13:24:11.673114     278 log.go:172] 
(0xc0008fe2c0) (0xc00084a6e0) Stream removed, broadcasting: 1\nI0216 13:24:11.673902     278 log.go:172] (0xc0008fe2c0) (0xc000540140) Stream removed, broadcasting: 3\nI0216 13:24:11.674184     278 log.go:172] (0xc0008fe2c0) (0xc00084a780) Stream removed, broadcasting: 5\nI0216 13:24:11.674265     278 log.go:172] (0xc0008fe2c0) (0xc00084a6e0) Stream removed, broadcasting: 1\nI0216 13:24:11.674274     278 log.go:172] (0xc0008fe2c0) (0xc000540140) Stream removed, broadcasting: 3\nI0216 13:24:11.674280     278 log.go:172] (0xc0008fe2c0) (0xc00084a780) Stream removed, broadcasting: 5\n"
Feb 16 13:24:11.682: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 13:24:11.682: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 13:24:11.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:24:12.405: INFO: stderr: "I0216 13:24:11.895074     297 log.go:172] (0xc000a02790) (0xc000902a00) Create stream\nI0216 13:24:11.897358     297 log.go:172] (0xc000a02790) (0xc000902a00) Stream added, broadcasting: 1\nI0216 13:24:11.921343     297 log.go:172] (0xc000a02790) Reply frame received for 1\nI0216 13:24:11.921545     297 log.go:172] (0xc000a02790) (0xc000902000) Create stream\nI0216 13:24:11.921574     297 log.go:172] (0xc000a02790) (0xc000902000) Stream added, broadcasting: 3\nI0216 13:24:11.925150     297 log.go:172] (0xc000a02790) Reply frame received for 3\nI0216 13:24:11.925185     297 log.go:172] (0xc000a02790) (0xc0006a2280) Create stream\nI0216 13:24:11.925201     297 log.go:172] (0xc000a02790) (0xc0006a2280) Stream added, broadcasting: 5\nI0216 13:24:11.929459     297 log.go:172] (0xc000a02790) Reply frame received for 5\nI0216 13:24:12.117910     297 log.go:172] (0xc000a02790) Data frame received for 3\nI0216 13:24:12.117984     297 log.go:172] (0xc000902000) (3) Data frame handling\nI0216 13:24:12.118002     297 log.go:172] (0xc000902000) (3) Data frame sent\nI0216 13:24:12.118027     297 log.go:172] (0xc000a02790) Data frame received for 5\nI0216 13:24:12.118043     297 log.go:172] (0xc0006a2280) (5) Data frame handling\nI0216 13:24:12.118055     297 log.go:172] (0xc0006a2280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0216 13:24:12.385327     297 log.go:172] (0xc000a02790) Data frame received for 1\nI0216 13:24:12.385689     297 log.go:172] (0xc000a02790) (0xc000902000) Stream removed, broadcasting: 3\nI0216 13:24:12.385756     297 log.go:172] (0xc000902a00) (1) Data frame handling\nI0216 13:24:12.385796     297 log.go:172] (0xc000902a00) (1) Data frame sent\nI0216 13:24:12.385889     297 log.go:172] (0xc000a02790) (0xc0006a2280) Stream removed, broadcasting: 5\nI0216 13:24:12.386096     297 log.go:172] (0xc000a02790) (0xc000902a00) 
Stream removed, broadcasting: 1\nI0216 13:24:12.386162     297 log.go:172] (0xc000a02790) Go away received\nI0216 13:24:12.387936     297 log.go:172] (0xc000a02790) (0xc000902a00) Stream removed, broadcasting: 1\nI0216 13:24:12.387989     297 log.go:172] (0xc000a02790) (0xc000902000) Stream removed, broadcasting: 3\nI0216 13:24:12.388011     297 log.go:172] (0xc000a02790) (0xc0006a2280) Stream removed, broadcasting: 5\n"
Feb 16 13:24:12.405: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 13:24:12.405: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

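Editor's note: the three execs above restore index.html on every replica. Note the `mv: can't rename '/tmp/index.html': No such file or directory` in the stderr for ss-1 and ss-2 — those pods were only just created during scale-up, so nothing was ever moved to /tmp there; the trailing `|| true` is what lets those errors pass and the step still count as success. A sketch of the restore loop, again only printing the commands:

```shell
# Sketch only: iterates the replica names seen in the log and prints the
# restore command the test runs on each. On freshly created replicas the
# mv fails harmlessly thanks to "|| true".
NS=statefulset-253
RESTORE="mv -v /tmp/index.html /usr/share/nginx/html/ || true"
for POD in ss-0 ss-1 ss-2; do
  echo kubectl --namespace="$NS" exec "$POD" -- /bin/sh -x -c "$RESTORE"
done
```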
Feb 16 13:24:12.416: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:24:12.416: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:24:12.416: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 16 13:24:12.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 13:24:12.962: INFO: stderr: "I0216 13:24:12.706988     317 log.go:172] (0xc00012adc0) (0xc00065c820) Create stream\nI0216 13:24:12.707506     317 log.go:172] (0xc00012adc0) (0xc00065c820) Stream added, broadcasting: 1\nI0216 13:24:12.720500     317 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0216 13:24:12.720591     317 log.go:172] (0xc00012adc0) (0xc00063e000) Create stream\nI0216 13:24:12.720644     317 log.go:172] (0xc00012adc0) (0xc00063e000) Stream added, broadcasting: 3\nI0216 13:24:12.723471     317 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0216 13:24:12.723631     317 log.go:172] (0xc00012adc0) (0xc00078a000) Create stream\nI0216 13:24:12.723659     317 log.go:172] (0xc00012adc0) (0xc00078a000) Stream added, broadcasting: 5\nI0216 13:24:12.726158     317 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0216 13:24:12.845516     317 log.go:172] (0xc00012adc0) Data frame received for 3\nI0216 13:24:12.845583     317 log.go:172] (0xc00063e000) (3) Data frame handling\nI0216 13:24:12.845627     317 log.go:172] (0xc00012adc0) Data frame received for 5\nI0216 13:24:12.845678     317 log.go:172] (0xc00078a000) (5) Data frame handling\nI0216 13:24:12.845693     317 log.go:172] (0xc00078a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 13:24:12.845708     317 log.go:172] (0xc00063e000) (3) Data frame sent\nI0216 13:24:12.954196     317 log.go:172] (0xc00012adc0) Data frame received for 1\nI0216 13:24:12.954255     317 log.go:172] (0xc00012adc0) (0xc00063e000) Stream removed, broadcasting: 3\nI0216 13:24:12.954287     317 log.go:172] (0xc00065c820) (1) Data frame handling\nI0216 13:24:12.954304     317 log.go:172] (0xc00065c820) (1) Data frame sent\nI0216 13:24:12.954353     317 log.go:172] (0xc00012adc0) (0xc00078a000) Stream removed, broadcasting: 5\nI0216 13:24:12.954445     317 log.go:172] (0xc00012adc0) (0xc00065c820) Stream removed, broadcasting: 1\nI0216 13:24:12.954504     317 log.go:172] 
(0xc00012adc0) Go away received\nI0216 13:24:12.955129     317 log.go:172] (0xc00012adc0) (0xc00065c820) Stream removed, broadcasting: 1\nI0216 13:24:12.955140     317 log.go:172] (0xc00012adc0) (0xc00063e000) Stream removed, broadcasting: 3\nI0216 13:24:12.955144     317 log.go:172] (0xc00012adc0) (0xc00078a000) Stream removed, broadcasting: 5\n"
Feb 16 13:24:12.962: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 13:24:12.962: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 13:24:12.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 13:24:13.292: INFO: stderr: "I0216 13:24:13.115956     338 log.go:172] (0xc000820420) (0xc00080a640) Create stream\nI0216 13:24:13.116122     338 log.go:172] (0xc000820420) (0xc00080a640) Stream added, broadcasting: 1\nI0216 13:24:13.118068     338 log.go:172] (0xc000820420) Reply frame received for 1\nI0216 13:24:13.118098     338 log.go:172] (0xc000820420) (0xc00080c000) Create stream\nI0216 13:24:13.118108     338 log.go:172] (0xc000820420) (0xc00080c000) Stream added, broadcasting: 3\nI0216 13:24:13.119254     338 log.go:172] (0xc000820420) Reply frame received for 3\nI0216 13:24:13.119291     338 log.go:172] (0xc000820420) (0xc00080c0a0) Create stream\nI0216 13:24:13.119301     338 log.go:172] (0xc000820420) (0xc00080c0a0) Stream added, broadcasting: 5\nI0216 13:24:13.120869     338 log.go:172] (0xc000820420) Reply frame received for 5\nI0216 13:24:13.184172     338 log.go:172] (0xc000820420) Data frame received for 5\nI0216 13:24:13.184203     338 log.go:172] (0xc00080c0a0) (5) Data frame handling\nI0216 13:24:13.184219     338 log.go:172] (0xc00080c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 13:24:13.228008     338 log.go:172] (0xc000820420) Data frame received for 3\nI0216 13:24:13.228145     338 log.go:172] (0xc00080c000) (3) Data frame handling\nI0216 13:24:13.228181     338 log.go:172] (0xc00080c000) (3) Data frame sent\nI0216 13:24:13.285199     338 log.go:172] (0xc000820420) (0xc00080c000) Stream removed, broadcasting: 3\nI0216 13:24:13.285283     338 log.go:172] (0xc000820420) Data frame received for 1\nI0216 13:24:13.285303     338 log.go:172] (0xc00080a640) (1) Data frame handling\nI0216 13:24:13.285326     338 log.go:172] (0xc00080a640) (1) Data frame sent\nI0216 13:24:13.285340     338 log.go:172] (0xc000820420) (0xc00080c0a0) Stream removed, broadcasting: 5\nI0216 13:24:13.285377     338 log.go:172] (0xc000820420) (0xc00080a640) Stream removed, broadcasting: 1\nI0216 13:24:13.285417     338 log.go:172] 
(0xc000820420) Go away received\nI0216 13:24:13.285879     338 log.go:172] (0xc000820420) (0xc00080a640) Stream removed, broadcasting: 1\nI0216 13:24:13.285896     338 log.go:172] (0xc000820420) (0xc00080c000) Stream removed, broadcasting: 3\nI0216 13:24:13.285902     338 log.go:172] (0xc000820420) (0xc00080c0a0) Stream removed, broadcasting: 5\n"
Feb 16 13:24:13.292: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 13:24:13.292: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 13:24:13.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 13:24:14.103: INFO: stderr: "I0216 13:24:13.586935     358 log.go:172] (0xc000128d10) (0xc00060a8c0) Create stream\nI0216 13:24:13.587108     358 log.go:172] (0xc000128d10) (0xc00060a8c0) Stream added, broadcasting: 1\nI0216 13:24:13.597353     358 log.go:172] (0xc000128d10) Reply frame received for 1\nI0216 13:24:13.597608     358 log.go:172] (0xc000128d10) (0xc0008ba000) Create stream\nI0216 13:24:13.597636     358 log.go:172] (0xc000128d10) (0xc0008ba000) Stream added, broadcasting: 3\nI0216 13:24:13.599575     358 log.go:172] (0xc000128d10) Reply frame received for 3\nI0216 13:24:13.599603     358 log.go:172] (0xc000128d10) (0xc00060a960) Create stream\nI0216 13:24:13.599617     358 log.go:172] (0xc000128d10) (0xc00060a960) Stream added, broadcasting: 5\nI0216 13:24:13.601460     358 log.go:172] (0xc000128d10) Reply frame received for 5\nI0216 13:24:13.744168     358 log.go:172] (0xc000128d10) Data frame received for 5\nI0216 13:24:13.744314     358 log.go:172] (0xc00060a960) (5) Data frame handling\nI0216 13:24:13.744352     358 log.go:172] (0xc00060a960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 13:24:13.788785     358 log.go:172] (0xc000128d10) Data frame received for 3\nI0216 13:24:13.788941     358 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0216 13:24:13.788987     358 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0216 13:24:14.086575     358 log.go:172] (0xc000128d10) Data frame received for 1\nI0216 13:24:14.086718     358 log.go:172] (0xc00060a8c0) (1) Data frame handling\nI0216 13:24:14.086802     358 log.go:172] (0xc00060a8c0) (1) Data frame sent\nI0216 13:24:14.088206     358 log.go:172] (0xc000128d10) (0xc00060a8c0) Stream removed, broadcasting: 1\nI0216 13:24:14.089993     358 log.go:172] (0xc000128d10) (0xc0008ba000) Stream removed, broadcasting: 3\nI0216 13:24:14.090364     358 log.go:172] (0xc000128d10) (0xc00060a960) Stream removed, broadcasting: 5\nI0216 13:24:14.090482     358 log.go:172] 
(0xc000128d10) (0xc00060a8c0) Stream removed, broadcasting: 1\nI0216 13:24:14.090502     358 log.go:172] (0xc000128d10) (0xc0008ba000) Stream removed, broadcasting: 3\nI0216 13:24:14.090528     358 log.go:172] (0xc000128d10) (0xc00060a960) Stream removed, broadcasting: 5\nI0216 13:24:14.090675     358 log.go:172] (0xc000128d10) Go away received\n"
Feb 16 13:24:14.103: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 13:24:14.103: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

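Editor's note: with all three pods unready again, the test scales the set to 0 and polls until status.replicas reaches 0 — the point of the "Scale down will not halt with unhealthy stateful pod" step is that deletion proceeds even though no pod is Ready. The repeated POD/NODE/PHASE tables that follow are iterations of that polling loop. A toy sketch of such a wait loop, with `next_replicas` as a hypothetical stand-in that pops simulated observations (3, 2, 0) instead of reading status.replicas from the API:

```shell
# Sketch only: next_replicas is a hypothetical stand-in for reading
# status.replicas; we feed it a canned 3 -> 2 -> 0 sequence so the
# loop terminates, mirroring the countdown visible in the log.
next_replicas() {
  REPLICAS="${REPLICA_SEQ%% *}"     # pop the next observed count
  REPLICA_SEQ="${REPLICA_SEQ#* } "  # drop it from the queue
}
REPLICA_SEQ="3 2 0 "
next_replicas
until [ "$REPLICAS" -eq 0 ]; do
  echo "StatefulSet ss has not reached scale 0, at $REPLICAS"
  next_replicas
done
```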
Feb 16 13:24:14.103: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 13:24:14.119: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 16 13:24:24.128: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 13:24:24.128: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 13:24:24.128: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 13:24:24.149: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 16 13:24:24.149: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:24.149: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:24.149: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:24.149: INFO: 
Feb 16 13:24:24.149: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 13:24:25.867: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 16 13:24:25.867: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:25.867: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:25.867: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:25.867: INFO: 
Feb 16 13:24:25.867: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 13:24:26.896: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 16 13:24:26.896: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:26.896: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:26.897: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:26.897: INFO: 
Feb 16 13:24:26.897: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 13:24:27.906: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 16 13:24:27.906: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:27.906: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:27.906: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:27.906: INFO: 
Feb 16 13:24:27.906: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 13:24:28.914: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 16 13:24:28.914: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:28.915: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:28.915: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:28.915: INFO: 
Feb 16 13:24:28.915: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 13:24:29.929: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 16 13:24:29.929: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:29.929: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:29.929: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:29.929: INFO: 
Feb 16 13:24:29.929: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 13:24:30.941: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 16 13:24:30.941: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:30.941: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:30.941: INFO: 
Feb 16 13:24:30.941: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 16 13:24:32.013: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 16 13:24:32.013: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:32.013: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:32.013: INFO: 
Feb 16 13:24:32.013: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 16 13:24:33.025: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 16 13:24:33.025: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:33.025: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:33.025: INFO: 
Feb 16 13:24:33.025: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 16 13:24:34.036: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 16 13:24:34.036: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:39 +0000 UTC  }]
Feb 16 13:24:34.036: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:24:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:23:59 +0000 UTC  }]
Feb 16 13:24:34.036: INFO: 
Feb 16 13:24:34.036: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-253
Feb 16 13:24:35.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:24:35.311: INFO: rc: 1
Feb 16 13:24:35.311: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002be7260 exit status 1   true [0xc0005741f8 0xc000574248 0xc0005742b0] [0xc0005741f8 0xc000574248 0xc0005742b0] [0xc000574228 0xc000574278] [0xba6c50 0xba6c50] 0xc002a08e40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 16 13:24:45.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:24:45.521: INFO: rc: 1
Feb 16 13:24:45.522: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7c80 exit status 1   true [0xc002a8a2b8 0xc002a8a308 0xc002a8a330] [0xc002a8a2b8 0xc002a8a308 0xc002a8a330] [0xc002a8a2f0 0xc002a8a328] [0xba6c50 0xba6c50] 0xc001806e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:24:55.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:24:55.744: INFO: rc: 1
Feb 16 13:24:55.745: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00057baa0 exit status 1   true [0xc0013b4698 0xc0013b4748 0xc0013b4878] [0xc0013b4698 0xc0013b4748 0xc0013b4878] [0xc0013b4738 0xc0013b4818] [0xba6c50 0xba6c50] 0xc0024e3020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:25:05.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:25:05.941: INFO: rc: 1
Feb 16 13:25:05.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7d70 exit status 1   true [0xc002a8a338 0xc002a8a390 0xc002a8a3b8] [0xc002a8a338 0xc002a8a390 0xc002a8a3b8] [0xc002a8a378 0xc002a8a3b0] [0xba6c50 0xba6c50] 0xc001807140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:25:15.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:25:16.149: INFO: rc: 1
Feb 16 13:25:16.150: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a5e030 exit status 1   true [0xc002c84208 0xc002c84240 0xc002c84298] [0xc002c84208 0xc002c84240 0xc002c84298] [0xc002c84230 0xc002c84280] [0xba6c50 0xba6c50] 0xc0024f12c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:25:26.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:25:26.350: INFO: rc: 1
Feb 16 13:25:26.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a5e0f0 exit status 1   true [0xc002c842a0 0xc002c842b8 0xc002c842d0] [0xc002c842a0 0xc002c842b8 0xc002c842d0] [0xc002c842b0 0xc002c842c8] [0xba6c50 0xba6c50] 0xc0024f15c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:25:36.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:25:36.540: INFO: rc: 1
Feb 16 13:25:36.541: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7e60 exit status 1   true [0xc002a8a3c0 0xc002a8a400 0xc002a8a430] [0xc002a8a3c0 0xc002a8a400 0xc002a8a430] [0xc002a8a3f8 0xc002a8a410] [0xba6c50 0xba6c50] 0xc001807440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:25:46.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:25:46.686: INFO: rc: 1
Feb 16 13:25:46.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7f50 exit status 1   true [0xc002a8a448 0xc002a8a460 0xc002a8a4a0] [0xc002a8a448 0xc002a8a460 0xc002a8a4a0] [0xc002a8a458 0xc002a8a498] [0xba6c50 0xba6c50] 0xc001807740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:25:56.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:25:56.906: INFO: rc: 1
Feb 16 13:25:56.907: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002a48060 exit status 1   true [0xc002a8a4a8 0xc002a8a4e8 0xc002a8a528] [0xc002a8a4a8 0xc002a8a4e8 0xc002a8a528] [0xc002a8a4d0 0xc002a8a508] [0xba6c50 0xba6c50] 0xc001807da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:26:06.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:26:09.675: INFO: rc: 1
Feb 16 13:26:09.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002be7380 exit status 1   true [0xc0005742f0 0xc000574330 0xc0005743b8] [0xc0005742f0 0xc000574330 0xc0005743b8] [0xc000574320 0xc000574378] [0xba6c50 0xba6c50] 0xc002a09140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:26:19.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:26:19.844: INFO: rc: 1
Feb 16 13:26:19.844: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6b10 exit status 1   true [0xc000186000 0xc002a8a028 0xc002a8a058] [0xc000186000 0xc002a8a028 0xc002a8a058] [0xc002a8a020 0xc002a8a050] [0xba6c50 0xba6c50] 0xc002570720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:26:29.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:26:30.020: INFO: rc: 1
Feb 16 13:26:30.020: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001558090 exit status 1   true [0xc002c84008 0xc002c84048 0xc002c84078] [0xc002c84008 0xc002c84048 0xc002c84078] [0xc002c84040 0xc002c84058] [0xba6c50 0xba6c50] 0xc001806420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:26:40.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:26:40.132: INFO: rc: 1
Feb 16 13:26:40.132: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029dc0c0 exit status 1   true [0xc000574010 0xc000574050 0xc000574118] [0xc000574010 0xc000574050 0xc000574118] [0xc000574040 0xc000574100] [0xba6c50 0xba6c50] 0xc0024f0240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:26:50.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:26:50.304: INFO: rc: 1
Feb 16 13:26:50.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029dc1b0 exit status 1   true [0xc000574120 0xc0005741b8 0xc0005741e8] [0xc000574120 0xc0005741b8 0xc0005741e8] [0xc000574188 0xc0005741e0] [0xba6c50 0xba6c50] 0xc0024f05a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:27:00.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:27:00.528: INFO: rc: 1
Feb 16 13:27:00.528: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e1a0f0 exit status 1   true [0xc0013b4018 0xc0013b4110 0xc0013b41b8] [0xc0013b4018 0xc0013b4110 0xc0013b41b8] [0xc0013b40d0 0xc0013b4130] [0xba6c50 0xba6c50] 0xc002a08300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:27:10.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:27:10.658: INFO: rc: 1
Feb 16 13:27:10.658: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001558150 exit status 1   true [0xc002c84090 0xc002c840d0 0xc002c840f8] [0xc002c84090 0xc002c840d0 0xc002c840f8] [0xc002c840b0 0xc002c840f0] [0xba6c50 0xba6c50] 0xc001806a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:27:20.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:27:20.839: INFO: rc: 1
Feb 16 13:27:20.840: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029dc2d0 exit status 1   true [0xc0005741f8 0xc000574248 0xc0005742b0] [0xc0005741f8 0xc000574248 0xc0005742b0] [0xc000574228 0xc000574278] [0xba6c50 0xba6c50] 0xc0024f0ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:27:30.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:27:30.951: INFO: rc: 1
Feb 16 13:27:30.951: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029dc3c0 exit status 1   true [0xc0005742f0 0xc000574330 0xc0005743b8] [0xc0005742f0 0xc000574330 0xc0005743b8] [0xc000574320 0xc000574378] [0xba6c50 0xba6c50] 0xc0024f0de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:27:40.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:27:41.093: INFO: rc: 1
Feb 16 13:27:41.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6bd0 exit status 1   true [0xc002a8a060 0xc002a8a078 0xc002a8a0b8] [0xc002a8a060 0xc002a8a078 0xc002a8a0b8] [0xc002a8a070 0xc002a8a0b0] [0xba6c50 0xba6c50] 0xc002570d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:27:51.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:27:51.244: INFO: rc: 1
Feb 16 13:27:51.244: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6c90 exit status 1   true [0xc002a8a0d0 0xc002a8a0e8 0xc002a8a100] [0xc002a8a0d0 0xc002a8a0e8 0xc002a8a100] [0xc002a8a0e0 0xc002a8a0f8] [0xba6c50 0xba6c50] 0xc002571380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:28:01.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:28:01.418: INFO: rc: 1
Feb 16 13:28:01.418: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001558240 exit status 1   true [0xc002c84100 0xc002c84128 0xc002c84180] [0xc002c84100 0xc002c84128 0xc002c84180] [0xc002c84110 0xc002c84168] [0xba6c50 0xba6c50] 0xc001806fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:28:11.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:28:11.584: INFO: rc: 1
Feb 16 13:28:11.585: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029dc4b0 exit status 1   true [0xc0005743f0 0xc000574468 0xc0005744e8] [0xc0005743f0 0xc000574468 0xc0005744e8] [0xc000574430 0xc0005744c8] [0xba6c50 0xba6c50] 0xc0024f10e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:28:21.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:28:22.009: INFO: rc: 1
Feb 16 13:28:22.009: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e1a090 exit status 1   true [0xc000520038 0xc0013b40d0 0xc0013b4130] [0xc000520038 0xc0013b40d0 0xc0013b4130] [0xc0013b4080 0xc0013b4120] [0xba6c50 0xba6c50] 0xc002a08300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:28:32.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:28:32.153: INFO: rc: 1
Feb 16 13:28:32.153: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6b70 exit status 1   true [0xc002a8a008 0xc002a8a048 0xc002a8a060] [0xc002a8a008 0xc002a8a048 0xc002a8a060] [0xc002a8a028 0xc002a8a058] [0xba6c50 0xba6c50] 0xc002570720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:28:42.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:28:42.309: INFO: rc: 1
Feb 16 13:28:42.309: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6c60 exit status 1   true [0xc002a8a068 0xc002a8a090 0xc002a8a0d0] [0xc002a8a068 0xc002a8a090 0xc002a8a0d0] [0xc002a8a078 0xc002a8a0b8] [0xba6c50 0xba6c50] 0xc002570d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:28:52.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:28:52.450: INFO: rc: 1
Feb 16 13:28:52.451: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6d80 exit status 1   true [0xc002a8a0d8 0xc002a8a0f0 0xc002a8a108] [0xc002a8a0d8 0xc002a8a0f0 0xc002a8a108] [0xc002a8a0e8 0xc002a8a100] [0xba6c50 0xba6c50] 0xc002571380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:29:02.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:29:02.625: INFO: rc: 1
Feb 16 13:29:02.625: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001558120 exit status 1   true [0xc002c84008 0xc002c84048 0xc002c84078] [0xc002c84008 0xc002c84048 0xc002c84078] [0xc002c84040 0xc002c84058] [0xba6c50 0xba6c50] 0xc001806420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:29:12.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:29:12.754: INFO: rc: 1
Feb 16 13:29:12.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e6e40 exit status 1   true [0xc002a8a110 0xc002a8a128 0xc002a8a140] [0xc002a8a110 0xc002a8a128 0xc002a8a140] [0xc002a8a120 0xc002a8a138] [0xba6c50 0xba6c50] 0xc002571860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:29:22.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:29:22.944: INFO: rc: 1
Feb 16 13:29:22.945: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029dc120 exit status 1   true [0xc000574010 0xc000574050 0xc000574118] [0xc000574010 0xc000574050 0xc000574118] [0xc000574040 0xc000574100] [0xba6c50 0xba6c50] 0xc0024f0240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:29:32.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:29:33.073: INFO: rc: 1
Feb 16 13:29:33.073: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001558270 exit status 1   true [0xc002c84090 0xc002c840d0 0xc002c840f8] [0xc002c84090 0xc002c840d0 0xc002c840f8] [0xc002c840b0 0xc002c840f0] [0xba6c50 0xba6c50] 0xc001806a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 16 13:29:43.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-253 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 13:29:43.436: INFO: rc: 1
Feb 16 13:29:43.437: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 16 13:29:43.437: INFO: Scaling statefulset ss to 0
Feb 16 13:29:43.457: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 16 13:29:43.462: INFO: Deleting all statefulset in ns statefulset-253
Feb 16 13:29:43.471: INFO: Scaling statefulset ss to 0
Feb 16 13:29:43.490: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 13:29:43.493: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:29:43.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-253" for this suite.
Feb 16 13:29:49.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:29:49.774: INFO: namespace statefulset-253 deletion completed in 6.206070117s

• [SLOW TEST:370.985 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
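The burst-scaling test above retried the same `kubectl exec` every 10 s until the scale-down completed, treating each non-zero `rc` as "not yet". A minimal local sketch of that retry-until-success pattern, assuming nothing about the real `RunHostCmd` helper beyond its interval-and-deadline behavior (the marker file stands in for the kubectl command):

```shell
#!/bin/sh
# Retry a command at a fixed interval until it succeeds or attempts run out.
# run_with_retry is a local sketch; the real test framework retries RunHostCmd
# every 10 s up to a deadline rather than a fixed attempt count.
run_with_retry() {
  cmd=$1; interval=$2; max_tries=$3
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    if sh -c "$cmd"; then
      return 0                      # rc 0: command finally succeeded
    fi
    i=$((i + 1))
    sleep "$interval"               # back off before the next attempt
  done
  return 1                          # attempts exhausted, give up
}

# Example: the command fails until a background job creates the marker file.
( sleep 1; touch /tmp/retry-marker ) &
run_with_retry "test -f /tmp/retry-marker" 1 10 && echo "succeeded"
rm -f /tmp/retry-marker
```

The log's loop never succeeds because `ss-0` is already deleted; the framework eventually stops retrying and proceeds to scale the StatefulSet to 0, which is why the test still passes.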
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:29:49.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:29:58.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4311" for this suite.
Feb 16 13:30:04.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:30:04.193: INFO: namespace kubelet-test-4311 deletion completed in 6.168943801s

• [SLOW TEST:14.419 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
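The Kubelet test above asserts that a container whose command always fails ends up with a populated `state.terminated.reason`. A sketch of where that reason lives in pod status and how to extract it; the JSON is a hand-written sample, not real cluster output, and the pod name in the comment is hypothetical:

```shell
#!/bin/sh
# Sample containerStatuses fragment as the kubelet would report it for a
# container that exited non-zero (hand-written for illustration).
status='{"state":{"terminated":{"exitCode":1,"reason":"Error"}}}'

# Pull the terminated reason out of the JSON.
reason=$(printf '%s' "$status" | sed -n 's/.*"reason":"\([^"]*\)".*/\1/p')
echo "terminated reason: $reason"

# On a live cluster, the equivalent query would be (hypothetical pod name):
#   kubectl get pod bin-false-pod -n kubelet-test-4311 \
#     -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
```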
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:30:04.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d3037a00-58e0-4f9e-902a-4b8676aea4f9
STEP: Creating a pod to test consume configMaps
Feb 16 13:30:04.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa" in namespace "configmap-9655" to be "success or failure"
Feb 16 13:30:04.367: INFO: Pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 45.836358ms
Feb 16 13:30:06.380: INFO: Pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059405595s
Feb 16 13:30:08.388: INFO: Pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066721587s
Feb 16 13:30:10.397: INFO: Pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075911688s
Feb 16 13:30:12.406: INFO: Pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085563923s
STEP: Saw pod success
Feb 16 13:30:12.407: INFO: Pod "pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa" satisfied condition "success or failure"
Feb 16 13:30:12.410: INFO: Trying to get logs from node iruya-node pod pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa container configmap-volume-test: 
STEP: delete the pod
Feb 16 13:30:12.490: INFO: Waiting for pod pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa to disappear
Feb 16 13:30:12.500: INFO: Pod pod-configmaps-70ac01d3-87aa-4dc7-a3d5-191618bbf5aa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:30:12.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9655" for this suite.
Feb 16 13:30:18.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:30:18.653: INFO: namespace configmap-9655 deletion completed in 6.146574414s

• [SLOW TEST:14.460 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
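[editor's note] The ConfigMap volume test above creates a ConfigMap, mounts it into a pod that runs as a non-root user, and waits for the pod to reach "success or failure". A sketch of the objects involved, with illustrative names and an assumed data key and UID (only the container name pattern comes from the log):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                  # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod         # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, as the [LinuxOnly] non-root variant requires
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
```

If the mounted file is readable by the non-root user and the `cat` succeeds, the pod phase becomes `Succeeded`, matching the "Saw pod success" step in the log.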
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:30:18.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 16 13:30:18.736: INFO: Waiting up to 5m0s for pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae" in namespace "containers-6353" to be "success or failure"
Feb 16 13:30:18.751: INFO: Pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae": Phase="Pending", Reason="", readiness=false. Elapsed: 14.301546ms
Feb 16 13:30:20.768: INFO: Pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032012041s
Feb 16 13:30:22.784: INFO: Pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047789106s
Feb 16 13:30:24.793: INFO: Pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056191569s
Feb 16 13:30:26.806: INFO: Pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069487215s
STEP: Saw pod success
Feb 16 13:30:26.806: INFO: Pod "client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae" satisfied condition "success or failure"
Feb 16 13:30:26.810: INFO: Trying to get logs from node iruya-node pod client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae container test-container: 
STEP: delete the pod
Feb 16 13:30:26.901: INFO: Waiting for pod client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae to disappear
Feb 16 13:30:26.907: INFO: Pod client-containers-d182c6a8-16df-40ee-a3b8-c9a0207e1eae no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:30:26.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6353" for this suite.
Feb 16 13:30:32.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:30:33.142: INFO: namespace containers-6353 deletion completed in 6.226788412s

• [SLOW TEST:14.488 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
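[editor's note] The "image defaults" test above creates a pod that sets neither `command` nor `args`, so the container runs the image's built-in ENTRYPOINT and CMD unchanged. A minimal sketch (names and image are assumptions; the real test uses its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-pod     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # image choice is an assumption
    # no command/args: the image's ENTRYPOINT and CMD apply as-is
```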
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:30:33.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 13:30:33.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1889'
Feb 16 13:30:33.390: INFO: stderr: ""
Feb 16 13:30:33.390: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 16 13:30:43.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1889 -o json'
Feb 16 13:30:43.601: INFO: stderr: ""
Feb 16 13:30:43.601: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-16T13:30:33Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1889\",\n        \"resourceVersion\": \"24574678\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1889/pods/e2e-test-nginx-pod\",\n        \"uid\": \"6ad97975-3e2a-4d6c-8b9b-132a81b1260f\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-t8dhq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-t8dhq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-t8dhq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T13:30:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T13:30:40Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T13:30:40Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T13:30:33Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://d191deef2a83d5e89ec8617e1086cf7cde219dcbce2040c76da1a16f69a15ec1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-16T13:30:39Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-16T13:30:33Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 16 13:30:43.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1889'
Feb 16 13:30:43.958: INFO: stderr: ""
Feb 16 13:30:43.958: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 16 13:30:44.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1889'
Feb 16 13:30:50.076: INFO: stderr: ""
Feb 16 13:30:50.076: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:30:50.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1889" for this suite.
Feb 16 13:30:56.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:30:56.212: INFO: namespace kubectl-1889 deletion completed in 6.12862473s

• [SLOW TEST:23.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
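[editor's note] In the `kubectl replace` test above, the log shows the flow: fetch the live pod with `kubectl get pod ... -o json`, swap the image, and pipe the result into `kubectl replace -f -`. Since `kubectl replace` requires a complete object, the piped manifest is the full pod JSON with only the image changed. Trimmed to the fields that matter, it would resemble this sketch (values taken from the log; the full object carries everything shown in the JSON above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-1889
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the updated image the test verifies
```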
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:30:56.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:30:56.533: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"af53826a-eeca-4d19-a9c0-95094c0c154c", Controller:(*bool)(0xc0025f7b62), BlockOwnerDeletion:(*bool)(0xc0025f7b63)}}
Feb 16 13:30:56.581: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"820139f6-710e-433c-a2d7-13cfd455b8dc", Controller:(*bool)(0xc0014b7bea), BlockOwnerDeletion:(*bool)(0xc0014b7beb)}}
Feb 16 13:30:56.598: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"34b85248-2acc-4854-910e-d17cc2feba28", Controller:(*bool)(0xc0014b7dfa), BlockOwnerDeletion:(*bool)(0xc0014b7dfb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:31:01.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7863" for this suite.
Feb 16 13:31:07.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:31:07.828: INFO: namespace gc-7863 deletion completed in 6.200721307s

• [SLOW TEST:11.615 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
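[editor's note] The garbage-collector log lines above show three pods whose `ownerReferences` form a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The test asserts that deletion is not blocked by this circle. As a sketch, pod1's metadata as the test sets it (UID copied from the log; the container spec is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: af53826a-eeca-4d19-a9c0-95094c0c154c   # pod3's UID, per the log
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: c                    # illustrative container
    image: nginx:1.14-alpine
```

pod2 and pod3 carry analogous `ownerReferences` entries pointing at pod1 and pod2 respectively, closing the cycle that the garbage collector must still be able to clean up.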
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:31:07.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 13:31:07.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1629'
Feb 16 13:31:08.110: INFO: stderr: ""
Feb 16 13:31:08.110: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 16 13:31:08.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1629'
Feb 16 13:31:11.752: INFO: stderr: ""
Feb 16 13:31:11.752: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:31:11.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1629" for this suite.
Feb 16 13:31:17.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:31:17.972: INFO: namespace kubectl-1629 deletion completed in 6.21517248s

• [SLOW TEST:10.144 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
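[editor's note] With `--restart=Never` and the `run-pod/v1` generator, the `kubectl run` invocation in the test above creates a bare Pod rather than a workload controller. The equivalent declarative manifest, reconstructed from the command line in the log, would be roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
spec:
  restartPolicy: Never         # what --restart=Never sets
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```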
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:31:17.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9501794e-68ad-4913-8680-dafc3a119708
STEP: Creating a pod to test consume configMaps
Feb 16 13:31:18.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d" in namespace "projected-6571" to be "success or failure"
Feb 16 13:31:18.122: INFO: Pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.253098ms
Feb 16 13:31:20.131: INFO: Pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019413783s
Feb 16 13:31:22.139: INFO: Pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027342642s
Feb 16 13:31:24.151: INFO: Pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039327459s
Feb 16 13:31:26.157: INFO: Pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044621652s
STEP: Saw pod success
Feb 16 13:31:26.157: INFO: Pod "pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d" satisfied condition "success or failure"
Feb 16 13:31:26.159: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 13:31:26.204: INFO: Waiting for pod pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d to disappear
Feb 16 13:31:26.245: INFO: Pod pod-projected-configmaps-15321a12-3829-4d0a-bbe7-f644d5e9f28d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:31:26.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6571" for this suite.
Feb 16 13:31:32.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:31:32.390: INFO: namespace projected-6571 deletion completed in 6.14146107s

• [SLOW TEST:14.416 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
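[editor's note] The projected ConfigMap test above exercises two things its name calls out: key-to-path mappings (`items`) and a per-item file mode. A sketch of the volume configuration, with illustrative names, key, path, and mode:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo     # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-pod         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1
            path: path/to/data-1     # the "mapping": key exposed under a chosen path
            mode: 0400               # per-item file mode, as the test name implies
```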
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:31:32.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 13:31:32.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2447'
Feb 16 13:31:32.681: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 13:31:32.681: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 16 13:31:34.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2447'
Feb 16 13:31:35.719: INFO: stderr: ""
Feb 16 13:31:35.719: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:31:35.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2447" for this suite.
Feb 16 13:31:57.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:31:57.924: INFO: namespace kubectl-2447 deletion completed in 22.199604927s

• [SLOW TEST:25.533 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
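[editor's note] kubectl itself warns in the log above that `--generator=deployment/apps.v1` is deprecated. The non-deprecated equivalents are `kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine` or an explicit Deployment manifest such as this sketch (replica count and label key are assumptions matching `kubectl run` defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```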
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:31:57.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 16 13:34:58.893: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:34:58.940: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:00.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:00.950: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:02.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:03.733: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:04.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:04.951: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:06.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:06.949: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:08.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:08.971: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:10.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:12.061: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:12.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:12.950: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:14.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:14.951: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:16.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:16.953: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:18.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:18.947: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:20.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:20.948: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:22.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:22.949: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:24.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:24.950: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:26.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:26.958: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 13:35:28.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:35:28.950: INFO: Pod pod-with-poststart-exec-hook still exists
[... 38 further "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists" poll pairs elided; the pod was polled every 2s from 13:35:30 through 13:36:44 ...]
Feb 16 13:36:46.941: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 13:36:46.948: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:36:46.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8791" for this suite.
Feb 16 13:37:09.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:37:09.123: INFO: namespace container-lifecycle-hook-8791 deletion completed in 22.168247397s

• [SLOW TEST:311.198 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:37:09.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 16 13:37:09.189: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 16 13:37:14.203: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:37:15.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9657" for this suite.
Feb 16 13:37:21.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:37:21.372: INFO: namespace replication-controller-9657 deletion completed in 6.110245012s

• [SLOW TEST:12.249 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:37:21.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 16 13:37:21.531: INFO: PodSpec: initContainers in spec.initContainers
Feb 16 13:38:22.920: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7bde684a-d5ba-4b80-b4c2-0aef81f4338c", GenerateName:"", Namespace:"init-container-1117", SelfLink:"/api/v1/namespaces/init-container-1117/pods/pod-init-7bde684a-d5ba-4b80-b4c2-0aef81f4338c", UID:"3867fb88-d765-4287-8d48-cf38a5141a6f", ResourceVersion:"24575542", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717457041, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"531150230"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tvcj7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fbc000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tvcj7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tvcj7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tvcj7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021fc088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc002e7e000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0021fc110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0021fc130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0021fc138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0021fc13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717457042, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717457042, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717457042, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717457041, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002778060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002476070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024760e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c22a0c3e02f54cdfc7e3c8f4808f83e4695bdbfe0c696fde295a56e6e40763bf"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027780a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002778080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:38:22.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1117" for this suite.
Feb 16 13:38:45.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:38:45.147: INFO: namespace init-container-1117 deletion completed in 22.183324773s

• [SLOW TEST:83.775 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:38:45.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-28c7fdb0-4b0b-4dcf-a800-9391ff1b5d27
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:38:45.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6805" for this suite.
Feb 16 13:38:51.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:38:51.563: INFO: namespace configmap-6805 deletion completed in 6.297766824s

• [SLOW TEST:6.415 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:38:51.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 16 13:38:51.800: INFO: Waiting up to 5m0s for pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084" in namespace "emptydir-4972" to be "success or failure"
Feb 16 13:38:51.823: INFO: Pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084": Phase="Pending", Reason="", readiness=false. Elapsed: 23.010043ms
Feb 16 13:38:53.834: INFO: Pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033992655s
Feb 16 13:38:55.843: INFO: Pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042697154s
Feb 16 13:38:57.852: INFO: Pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052412627s
Feb 16 13:38:59.867: INFO: Pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066883316s
STEP: Saw pod success
Feb 16 13:38:59.867: INFO: Pod "pod-cac7cd42-1176-452b-b0f0-8c6338830084" satisfied condition "success or failure"
Feb 16 13:38:59.883: INFO: Trying to get logs from node iruya-node pod pod-cac7cd42-1176-452b-b0f0-8c6338830084 container test-container: 
STEP: delete the pod
Feb 16 13:39:00.080: INFO: Waiting for pod pod-cac7cd42-1176-452b-b0f0-8c6338830084 to disappear
Feb 16 13:39:00.096: INFO: Pod pod-cac7cd42-1176-452b-b0f0-8c6338830084 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:39:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4972" for this suite.
Feb 16 13:39:06.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:39:06.299: INFO: namespace emptydir-4972 deletion completed in 6.19217259s

• [SLOW TEST:14.736 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:39:06.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:39:06.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9" in namespace "downward-api-2289" to be "success or failure"
Feb 16 13:39:06.457: INFO: Pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.276008ms
Feb 16 13:39:08.469: INFO: Pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032479557s
Feb 16 13:39:10.484: INFO: Pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047113982s
Feb 16 13:39:12.498: INFO: Pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061269062s
Feb 16 13:39:14.514: INFO: Pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077138755s
STEP: Saw pod success
Feb 16 13:39:14.514: INFO: Pod "downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9" satisfied condition "success or failure"
Feb 16 13:39:14.520: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9 container client-container: 
STEP: delete the pod
Feb 16 13:39:14.606: INFO: Waiting for pod downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9 to disappear
Feb 16 13:39:14.673: INFO: Pod downwardapi-volume-03c4b194-ebcb-46a6-bac0-8dad09610fe9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:39:14.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2289" for this suite.
Feb 16 13:39:20.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:39:20.875: INFO: namespace downward-api-2289 deletion completed in 6.193765581s

• [SLOW TEST:14.576 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:39:20.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-caa9dc15-d725-4d37-82b4-292efa40c240
STEP: Creating a pod to test consume secrets
Feb 16 13:39:21.056: INFO: Waiting up to 5m0s for pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31" in namespace "secrets-9626" to be "success or failure"
Feb 16 13:39:21.116: INFO: Pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31": Phase="Pending", Reason="", readiness=false. Elapsed: 59.950822ms
Feb 16 13:39:23.125: INFO: Pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068611988s
Feb 16 13:39:25.137: INFO: Pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081006742s
Feb 16 13:39:27.148: INFO: Pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091768001s
Feb 16 13:39:29.157: INFO: Pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100925665s
STEP: Saw pod success
Feb 16 13:39:29.157: INFO: Pod "pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31" satisfied condition "success or failure"
Feb 16 13:39:29.161: INFO: Trying to get logs from node iruya-node pod pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31 container secret-volume-test: 
STEP: delete the pod
Feb 16 13:39:29.226: INFO: Waiting for pod pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31 to disappear
Feb 16 13:39:29.279: INFO: Pod pod-secrets-a907e128-63ac-4a8c-ad74-9e513feecd31 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:39:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9626" for this suite.
Feb 16 13:39:35.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:39:35.436: INFO: namespace secrets-9626 deletion completed in 6.145257764s

• [SLOW TEST:14.561 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:39:35.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 16 13:39:35.558: INFO: namespace kubectl-531
Feb 16 13:39:35.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-531'
Feb 16 13:39:37.853: INFO: stderr: ""
Feb 16 13:39:37.853: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 16 13:39:38.865: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:39:38.865: INFO: Found 0 / 1
[... 5 further "Selector matched 1 pods for map[app:redis]" / "Found 0 / 1" poll pairs elided (13:39:39 through 13:39:43) ...]
Feb 16 13:39:44.880: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:39:44.880: INFO: Found 0 / 1
Feb 16 13:39:45.876: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:39:45.876: INFO: Found 1 / 1
Feb 16 13:39:45.876: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 16 13:39:45.892: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:39:45.892: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 16 13:39:45.892: INFO: wait on redis-master startup in kubectl-531 
Feb 16 13:39:45.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gbvx4 redis-master --namespace=kubectl-531'
Feb 16 13:39:46.064: INFO: stderr: ""
Feb 16 13:39:46.064: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 16 Feb 13:39:44.601 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Feb 13:39:44.601 # Server started, Redis version 3.2.12\n1:M 16 Feb 13:39:44.602 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Feb 13:39:44.602 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 16 13:39:46.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-531'
Feb 16 13:39:46.224: INFO: stderr: ""
Feb 16 13:39:46.224: INFO: stdout: "service/rm2 exposed\n"
Feb 16 13:39:46.232: INFO: Service rm2 in namespace kubectl-531 found.
STEP: exposing service
Feb 16 13:39:48.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-531'
Feb 16 13:39:48.458: INFO: stderr: ""
Feb 16 13:39:48.459: INFO: stdout: "service/rm3 exposed\n"
Feb 16 13:39:48.556: INFO: Service rm3 in namespace kubectl-531 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:39:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-531" for this suite.
Feb 16 13:40:14.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:40:14.734: INFO: namespace kubectl-531 deletion completed in 24.152605451s

• [SLOW TEST:39.297 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
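For reference, the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` call exercised above is roughly equivalent to applying a Service manifest like this (a sketch; the selector assumes the `app: redis` label the test's selector matched in the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-531
spec:
  selector:
    app: redis          # label matched by the test (map[app:redis])
  ports:
  - port: 1234          # Service port, from --port
    targetPort: 6379    # container port, from --target-port
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, creates another Service (`rm3`, port 2345) that inherits `rm2`'s selector, so both front the same Redis pod.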
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:40:14.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 16 13:40:14.942: INFO: Waiting up to 5m0s for pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87" in namespace "emptydir-2" to be "success or failure"
Feb 16 13:40:14.953: INFO: Pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87": Phase="Pending", Reason="", readiness=false. Elapsed: 11.549807ms
Feb 16 13:40:16.965: INFO: Pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023453556s
Feb 16 13:40:18.971: INFO: Pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029663685s
Feb 16 13:40:20.979: INFO: Pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037193607s
Feb 16 13:40:22.986: INFO: Pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044378194s
STEP: Saw pod success
Feb 16 13:40:22.986: INFO: Pod "pod-2e59783b-e1ab-40a4-a413-66b60b8eec87" satisfied condition "success or failure"
Feb 16 13:40:22.992: INFO: Trying to get logs from node iruya-node pod pod-2e59783b-e1ab-40a4-a413-66b60b8eec87 container test-container: 
STEP: delete the pod
Feb 16 13:40:23.353: INFO: Waiting for pod pod-2e59783b-e1ab-40a4-a413-66b60b8eec87 to disappear
Feb 16 13:40:23.363: INFO: Pod pod-2e59783b-e1ab-40a4-a413-66b60b8eec87 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:40:23.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2" for this suite.
Feb 16 13:40:29.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:40:29.564: INFO: namespace emptydir-2 deletion completed in 6.176386383s

• [SLOW TEST:14.828 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
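The "(root,0777,default)" case above creates a pod that mounts an `emptyDir` on the default medium and verifies the mount's permission bits. A minimal sketch of that kind of pod (names and image are hypothetical; the suite generates UID-based pod names and uses its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777     # hypothetical; the test generates a UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: test-container      # container name seen in the log
    image: busybox            # assumed image
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}              # "default medium" = node filesystem (no medium: Memory)
```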
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:40:29.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7368.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7368.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 13:40:41.715: INFO: File wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-2af070a7-1300-43a9-9646-63647e829e81 contains '' instead of 'foo.example.com.'
Feb 16 13:40:41.723: INFO: File jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-2af070a7-1300-43a9-9646-63647e829e81 contains '' instead of 'foo.example.com.'
Feb 16 13:40:41.723: INFO: Lookups using dns-7368/dns-test-2af070a7-1300-43a9-9646-63647e829e81 failed for: [wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local]

Feb 16 13:40:46.740: INFO: DNS probes using dns-test-2af070a7-1300-43a9-9646-63647e829e81 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7368.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7368.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 13:41:00.943: INFO: File wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 contains '' instead of 'bar.example.com.'
Feb 16 13:41:00.951: INFO: File jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 contains '' instead of 'bar.example.com.'
Feb 16 13:41:00.951: INFO: Lookups using dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 failed for: [wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local]

Feb 16 13:41:05.974: INFO: File wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 16 13:41:05.983: INFO: File jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 16 13:41:05.983: INFO: Lookups using dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 failed for: [wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local]

Feb 16 13:41:10.967: INFO: File wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 16 13:41:10.976: INFO: File jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 16 13:41:10.977: INFO: Lookups using dns-7368/dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 failed for: [wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local]

Feb 16 13:41:15.971: INFO: DNS probes using dns-test-57fdfa7f-4d4e-490b-aaf1-66849e3c7423 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7368.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7368.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 13:41:28.322: INFO: File wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-8fb5f4c7-5ae6-4cb9-81fb-b4be92dc0e98 contains '' instead of '10.105.204.183'
Feb 16 13:41:28.327: INFO: File jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local from pod  dns-7368/dns-test-8fb5f4c7-5ae6-4cb9-81fb-b4be92dc0e98 contains '' instead of '10.105.204.183'
Feb 16 13:41:28.327: INFO: Lookups using dns-7368/dns-test-8fb5f4c7-5ae6-4cb9-81fb-b4be92dc0e98 failed for: [wheezy_udp@dns-test-service-3.dns-7368.svc.cluster.local jessie_udp@dns-test-service-3.dns-7368.svc.cluster.local]

Feb 16 13:41:33.353: INFO: DNS probes using dns-test-8fb5f4c7-5ae6-4cb9-81fb-b4be92dc0e98 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:41:33.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7368" for this suite.
Feb 16 13:41:41.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:41:41.694: INFO: namespace dns-7368 deletion completed in 8.193555626s

• [SLOW TEST:72.130 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
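The ExternalName DNS spec above drives three phases: a CNAME to `foo.example.com`, a patch to `bar.example.com`, then a conversion to `type=ClusterIP` (verified by an A-record lookup). The initial Service looks roughly like this (a sketch using the names from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-7368
spec:
  type: ExternalName
  externalName: foo.example.com   # later patched to bar.example.com, then the
                                  # Service is switched to type: ClusterIP
```

Each phase is probed from inside the cluster with the `dig +short dns-test-service-3.dns-7368.svc.cluster.local CNAME` loop shown in the STEP lines; the transient "contains '' instead of ..." failures are just DNS propagation catching up before the probe succeeds.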
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:41:41.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 16 13:41:41.799: INFO: Waiting up to 5m0s for pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7" in namespace "var-expansion-8246" to be "success or failure"
Feb 16 13:41:41.808: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609007ms
Feb 16 13:41:43.831: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031299138s
Feb 16 13:41:45.845: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046062181s
Feb 16 13:41:47.860: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060498731s
Feb 16 13:41:49.874: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074081963s
Feb 16 13:41:51.890: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091014387s
STEP: Saw pod success
Feb 16 13:41:51.891: INFO: Pod "var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7" satisfied condition "success or failure"
Feb 16 13:41:51.901: INFO: Trying to get logs from node iruya-node pod var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7 container dapi-container: 
STEP: delete the pod
Feb 16 13:41:51.969: INFO: Waiting for pod var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7 to disappear
Feb 16 13:41:51.994: INFO: Pod var-expansion-9ff38124-a741-4fbd-b4fc-d0b24410e1a7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:41:51.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8246" for this suite.
Feb 16 13:41:58.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:41:58.192: INFO: namespace var-expansion-8246 deletion completed in 6.188010774s

• [SLOW TEST:16.498 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
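The env-composition test above relies on Kubernetes' dependent environment variable expansion: a variable's `value` may reference previously declared variables with `$(VAR)` syntax. A minimal sketch (values and pod name are hypothetical; `dapi-container` is the container name from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: "foo-value"       # hypothetical values
    - name: BAR
      value: "bar-value"
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"  # composed from the variables declared above it
```

Note that expansion only works for variables defined earlier in the same `env` list.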
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:41:58.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:42:58.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5607" for this suite.
Feb 16 13:43:20.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:43:20.477: INFO: namespace container-probe-5607 deletion completed in 22.173557995s

• [SLOW TEST:82.285 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
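The probe test above watches a pod for about a minute (13:41:58 to 13:42:58) to assert two things: a failing readiness probe keeps the pod out of Ready, and, unlike a liveness probe, it never triggers a container restart. A sketch of such a pod (name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready   # hypothetical name
spec:
  containers:
  - name: probe-test
    image: busybox              # assumed image
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"] # always fails, so Ready never becomes true
      initialDelaySeconds: 5
      periodSeconds: 5
```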
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:43:20.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:43:20.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1" in namespace "projected-2843" to be "success or failure"
Feb 16 13:43:20.614: INFO: Pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.479614ms
Feb 16 13:43:22.630: INFO: Pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034071198s
Feb 16 13:43:24.639: INFO: Pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043550762s
Feb 16 13:43:26.653: INFO: Pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057804457s
Feb 16 13:43:28.667: INFO: Pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071178521s
STEP: Saw pod success
Feb 16 13:43:28.667: INFO: Pod "downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1" satisfied condition "success or failure"
Feb 16 13:43:28.672: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1 container client-container: 
STEP: delete the pod
Feb 16 13:43:28.770: INFO: Waiting for pod downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1 to disappear
Feb 16 13:43:28.872: INFO: Pod downwardapi-volume-e89c0a72-ad6f-4402-87e3-8cb7aa4d02a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:43:28.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2843" for this suite.
Feb 16 13:43:34.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:43:35.086: INFO: namespace projected-2843 deletion completed in 6.201773938s

• [SLOW TEST:14.609 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
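The projected downwardAPI test above exposes the container's memory limit as a file via a `resourceFieldRef` inside a projected volume. A sketch of that shape (limit value and pod name are hypothetical; `client-container` is the container name from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"           # hypothetical limit; the file below reports it
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```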
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:43:35.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-dhg8
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 13:43:35.251: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dhg8" in namespace "subpath-9868" to be "success or failure"
Feb 16 13:43:35.263: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.416812ms
Feb 16 13:43:37.273: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021364572s
Feb 16 13:43:39.280: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028955657s
Feb 16 13:43:41.289: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037811417s
Feb 16 13:43:43.305: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.05404328s
Feb 16 13:43:45.316: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.065178384s
Feb 16 13:43:47.328: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 12.076221615s
Feb 16 13:43:49.337: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 14.085216584s
Feb 16 13:43:51.349: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.097327793s
Feb 16 13:43:53.357: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.105639846s
Feb 16 13:43:55.369: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 20.118194586s
Feb 16 13:43:57.378: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.126990547s
Feb 16 13:43:59.389: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 24.137928141s
Feb 16 13:44:01.398: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Running", Reason="", readiness=true. Elapsed: 26.147043274s
Feb 16 13:44:03.413: INFO: Pod "pod-subpath-test-configmap-dhg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.161430194s
STEP: Saw pod success
Feb 16 13:44:03.413: INFO: Pod "pod-subpath-test-configmap-dhg8" satisfied condition "success or failure"
Feb 16 13:44:03.418: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-dhg8 container test-container-subpath-configmap-dhg8: 
STEP: delete the pod
Feb 16 13:44:03.843: INFO: Waiting for pod pod-subpath-test-configmap-dhg8 to disappear
Feb 16 13:44:03.849: INFO: Pod pod-subpath-test-configmap-dhg8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dhg8
Feb 16 13:44:03.849: INFO: Deleting pod "pod-subpath-test-configmap-dhg8" in namespace "subpath-9868"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:44:03.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9868" for this suite.
Feb 16 13:44:09.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:44:10.063: INFO: namespace subpath-9868 deletion completed in 6.199574466s

• [SLOW TEST:34.976 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
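The atomic-writer subpath case above mounts a single entry of a ConfigMap volume with `subPath` rather than the whole volume, which is what makes atomic updates interesting to test. A sketch (the ConfigMap name and key are hypothetical; the log's pod name carries a generated `-dhg8` suffix):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap   # the test appends a generated suffix
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /test-volume/data"]
    volumeMounts:
    - name: cm-vol
      mountPath: /test-volume
      subPath: data                  # mounts one key's file, not the whole volume
  volumes:
  - name: cm-vol
    configMap:
      name: my-configmap             # hypothetical ConfigMap with a "data" key
```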
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:44:10.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:44:10.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573" in namespace "projected-984" to be "success or failure"
Feb 16 13:44:10.198: INFO: Pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573": Phase="Pending", Reason="", readiness=false. Elapsed: 9.370995ms
Feb 16 13:44:12.255: INFO: Pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066447973s
Feb 16 13:44:14.262: INFO: Pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073954458s
Feb 16 13:44:16.269: INFO: Pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080582036s
Feb 16 13:44:18.292: INFO: Pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103547255s
STEP: Saw pod success
Feb 16 13:44:18.292: INFO: Pod "downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573" satisfied condition "success or failure"
Feb 16 13:44:18.296: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573 container client-container: 
STEP: delete the pod
Feb 16 13:44:18.508: INFO: Waiting for pod downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573 to disappear
Feb 16 13:44:18.525: INFO: Pod downwardapi-volume-24314329-b325-4206-ab5b-be9587ba0573 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:44:18.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-984" for this suite.
Feb 16 13:44:24.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:44:24.651: INFO: namespace projected-984 deletion completed in 6.119533704s

• [SLOW TEST:14.588 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:44:24.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 16 13:44:24.778: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:44:38.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8427" for this suite.
Feb 16 13:44:44.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:44:44.423: INFO: namespace init-container-8427 deletion completed in 6.139204499s

• [SLOW TEST:19.771 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
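The init-container behavior exercised above (init containers on a `restartPolicy: Never` pod must each run to completion, in order, before the app container starts) can be reproduced by hand with a minimal manifest. The pod name, image, and commands below are illustrative assumptions, not taken from the log:

```shell
# Sketch: a RestartNever pod with two init containers that must complete
# sequentially before the app container is invoked.
# Names and the busybox image are assumptions, not from the test log.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo first init done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo second init done']
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo app running']
EOF
# Both init containers should show a terminated/completed state before
# the app container starts:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'
```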
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:44:44.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:44:44.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c" in namespace "downward-api-4180" to be "success or failure"
Feb 16 13:44:44.592: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044273ms
Feb 16 13:44:46.609: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026310818s
Feb 16 13:44:48.632: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049572473s
Feb 16 13:44:50.646: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063546371s
Feb 16 13:44:52.657: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074965747s
Feb 16 13:44:54.671: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088748938s
STEP: Saw pod success
Feb 16 13:44:54.671: INFO: Pod "downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c" satisfied condition "success or failure"
Feb 16 13:44:54.680: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c container client-container: 
STEP: delete the pod
Feb 16 13:44:54.768: INFO: Waiting for pod downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c to disappear
Feb 16 13:44:54.774: INFO: Pod downwardapi-volume-57e88f0f-7f04-482a-8d84-e7ca17d06c3c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:44:54.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4180" for this suite.
Feb 16 13:45:00.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:45:00.949: INFO: namespace downward-api-4180 deletion completed in 6.169860671s

• [SLOW TEST:16.526 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
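The downward API volume tests above (CPU request earlier, memory request here) both follow the same pattern: a resource field is projected into a file via a `downwardAPI` volume, and the client container prints it. A hand-run sketch, with illustrative names and a hypothetical request value:

```shell
# Sketch of the downwardAPI volume plugin under test: the container's own
# memory request is written to a file the container then cats.
# Pod name, image, and the 32Mi request are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'cat /etc/podinfo/mem_request']
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
# With the default divisor of 1, the value is reported in bytes:
kubectl logs downwardapi-volume-demo
```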
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:45:00.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 16 13:45:01.074: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:45:26.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3823" for this suite.
Feb 16 13:45:32.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:45:32.833: INFO: namespace pods-3823 deletion completed in 6.169355067s

• [SLOW TEST:31.885 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
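The submit/remove test above drives a watch and asserts that pod creation and graceful deletion are both observed. A rough kubectl equivalent (pod name is an assumption; `--output-watch-events`, which labels events ADDED/MODIFIED/DELETED, is only available on newer kubectl than the v1.15 client in this run):

```shell
# Sketch: observe a pod's lifecycle events while creating and gracefully
# deleting it. Requires a live cluster; names are illustrative.
kubectl get pods --watch-only --output-watch-events &
WATCH_PID=$!
kubectl run watch-demo --image=busybox --restart=Never -- sleep 300
kubectl wait --for=condition=Ready pod/watch-demo --timeout=60s
kubectl delete pod watch-demo --grace-period=30
kill $WATCH_PID
```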
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:45:32.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8007.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8007.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8007.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8007.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8007.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 148.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.148_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 148.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.148_tcp@PTR;
  sleep 1;
done

STEP: Running these commands on jessie:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8007.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8007.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8007.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8007.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8007.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8007.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 148.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.148_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 148.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.148_tcp@PTR;
  sleep 1;
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 13:45:45.141: INFO: Unable to read wheezy_udp@dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.196: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.206: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.221: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.229: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.236: INFO: Unable to read 10.96.9.148_udp@PTR from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.245: INFO: Unable to read 10.96.9.148_tcp@PTR from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.253: INFO: Unable to read jessie_udp@dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.262: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.268: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.275: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.280: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8007.svc.cluster.local from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.286: INFO: Unable to read jessie_udp@PodARecord from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.292: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.297: INFO: Unable to read 10.96.9.148_udp@PTR from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.302: INFO: Unable to read 10.96.9.148_tcp@PTR from pod dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae: the server could not find the requested resource (get pods dns-test-e649ac1b-e017-4633-9ae6-e5014907caae)
Feb 16 13:45:45.302: INFO: Lookups using dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae failed for: [wheezy_udp@dns-test-service.dns-8007.svc.cluster.local wheezy_tcp@dns-test-service.dns-8007.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8007.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8007.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.9.148_udp@PTR 10.96.9.148_tcp@PTR jessie_udp@dns-test-service.dns-8007.svc.cluster.local jessie_tcp@dns-test-service.dns-8007.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8007.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8007.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8007.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.9.148_udp@PTR 10.96.9.148_tcp@PTR]

Feb 16 13:45:50.437: INFO: DNS probes using dns-8007/dns-test-e649ac1b-e017-4633-9ae6-e5014907caae succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:45:50.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8007" for this suite.
Feb 16 13:45:56.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:45:56.959: INFO: namespace dns-8007 deletion completed in 6.215076112s

• [SLOW TEST:24.125 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
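The probe loop the DNS test runs boils down to repeated dig queries for the service's A record, its SRV records, and the PTR record of its ClusterIP (the early "Unable to read" lines are expected while records propagate; the probes later succeed). Single-shot equivalents, run from a throwaway utility pod — the pod name and dnsutils image are assumptions; the service name, namespace, and 10.96.9.148 ClusterIP come from the log, though the suite has since deleted that namespace:

```shell
# Sketch: one-off versions of the DNS probes from the test, run from a pod
# that has dig installed. Requires a live cluster with the service present.
kubectl run dns-probe --image=tutum/dnsutils --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-probe --timeout=60s
kubectl exec dns-probe -- dig +short dns-test-service.dns-8007.svc.cluster.local A
kubectl exec dns-probe -- dig +short _http._tcp.dns-test-service.dns-8007.svc.cluster.local SRV
kubectl exec dns-probe -- dig +short -x 10.96.9.148   # PTR for the service ClusterIP
```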
SSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:45:56.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5680b293-4f60-4fd7-b270-74cc8a20eb1f
STEP: Creating configMap with name cm-test-opt-upd-0097a7f8-5c01-42ab-9767-d6296ba75128
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5680b293-4f60-4fd7-b270-74cc8a20eb1f
STEP: Updating configmap cm-test-opt-upd-0097a7f8-5c01-42ab-9767-d6296ba75128
STEP: Creating configMap with name cm-test-opt-create-f93822a5-4687-4d5a-bca3-4e7d289b00d5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:46:13.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1451" for this suite.
Feb 16 13:46:47.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:46:47.907: INFO: namespace configmap-1451 deletion completed in 34.526388799s

• [SLOW TEST:50.948 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
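The "optional updates" test above exercises two properties of ConfigMap volumes: a volume referencing a missing ConfigMap marked `optional: true` still lets the pod start, and creates/updates to the ConfigMap are eventually reflected in the mounted files. A hand-run sketch with illustrative names (propagation depends on the kubelet sync period and can take up to about a minute):

```shell
# Sketch: mount an optional ConfigMap, then update it and observe the
# mounted file change after kubelet sync. Names are assumptions.
kubectl create configmap cm-upd --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-upd
      optional: true
EOF
kubectl patch configmap cm-upd -p '{"data":{"key":"value-2"}}'
# After the kubelet syncs, the mounted file reflects the update:
kubectl exec cm-demo -- cat /etc/cfg/key
```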
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:46:47.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 16 13:46:48.017: INFO: Waiting up to 5m0s for pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255" in namespace "downward-api-8859" to be "success or failure"
Feb 16 13:46:48.031: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255": Phase="Pending", Reason="", readiness=false. Elapsed: 13.646398ms
Feb 16 13:46:50.052: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035355828s
Feb 16 13:46:52.060: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042826462s
Feb 16 13:46:54.065: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047447656s
Feb 16 13:46:56.071: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05418993s
Feb 16 13:46:58.079: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061471031s
STEP: Saw pod success
Feb 16 13:46:58.079: INFO: Pod "downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255" satisfied condition "success or failure"
Feb 16 13:46:58.081: INFO: Trying to get logs from node iruya-node pod downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255 container dapi-container: 
STEP: delete the pod
Feb 16 13:46:58.265: INFO: Waiting for pod downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255 to disappear
Feb 16 13:46:58.274: INFO: Pod downward-api-ed6b6760-4b17-4aec-8ec5-a956baae4255 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:46:58.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8859" for this suite.
Feb 16 13:47:04.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:47:04.485: INFO: namespace downward-api-8859 deletion completed in 6.199420473s

• [SLOW TEST:16.577 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
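The behavior verified above — downward-API env vars referencing `limits.cpu` / `limits.memory` fall back to the node's allocatable resources when the container sets no limits — can be sketched with a minimal pod (names and image are assumptions):

```shell
# Sketch: no resource limits are set, so the downward-API env vars below
# resolve to node allocatable CPU and memory. Names are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ['sh', '-c', 'echo "cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"']
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
# The printed values mirror the scheduling node's allocatable CPU/memory:
kubectl logs dapi-demo
```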
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:47:04.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:47:04.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06" in namespace "downward-api-5684" to be "success or failure"
Feb 16 13:47:04.627: INFO: Pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06": Phase="Pending", Reason="", readiness=false. Elapsed: 10.59612ms
Feb 16 13:47:06.640: INFO: Pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023079147s
Feb 16 13:47:08.657: INFO: Pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040204569s
Feb 16 13:47:10.670: INFO: Pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053523535s
Feb 16 13:47:12.698: INFO: Pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081937402s
STEP: Saw pod success
Feb 16 13:47:12.698: INFO: Pod "downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06" satisfied condition "success or failure"
Feb 16 13:47:12.703: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06 container client-container: 
STEP: delete the pod
Feb 16 13:47:12.749: INFO: Waiting for pod downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06 to disappear
Feb 16 13:47:12.790: INFO: Pod downwardapi-volume-b01a1dee-3bb2-4ab0-96a6-fdfd88347c06 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:47:12.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5684" for this suite.
Feb 16 13:47:18.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:47:18.995: INFO: namespace downward-api-5684 deletion completed in 6.20074993s

• [SLOW TEST:14.510 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:47:18.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6a9c0a0c-527f-4c67-9966-e8f265b40bc4
STEP: Creating a pod to test consume secrets
Feb 16 13:47:19.146: INFO: Waiting up to 5m0s for pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8" in namespace "secrets-1774" to be "success or failure"
Feb 16 13:47:19.160: INFO: Pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.557046ms
Feb 16 13:47:21.173: INFO: Pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026518022s
Feb 16 13:47:23.180: INFO: Pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033983527s
Feb 16 13:47:25.187: INFO: Pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041213238s
Feb 16 13:47:27.197: INFO: Pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050798816s
STEP: Saw pod success
Feb 16 13:47:27.197: INFO: Pod "pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8" satisfied condition "success or failure"
Feb 16 13:47:27.200: INFO: Trying to get logs from node iruya-node pod pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8 container secret-env-test: 
STEP: delete the pod
Feb 16 13:47:27.265: INFO: Waiting for pod pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8 to disappear
Feb 16 13:47:27.276: INFO: Pod pod-secrets-082d36b9-4231-4747-8216-e322ed7ac1b8 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:47:27.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1774" for this suite.
Feb 16 13:47:33.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:47:33.527: INFO: namespace secrets-1774 deletion completed in 6.243734297s

• [SLOW TEST:14.531 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
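The Secrets test above injects a Secret key into a container environment variable and checks the container sees the value. A minimal reproduction with illustrative names:

```shell
# Sketch: consume a Secret key as an env var, as the test's
# secret-env-test container does. Names and image are assumptions.
kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ['sh', '-c', 'echo "SECRET_DATA=$SECRET_DATA"']
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
EOF
kubectl logs pod-secrets-demo
```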
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:47:33.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 16 13:47:33.782: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5229,SelfLink:/api/v1/namespaces/watch-5229/configmaps/e2e-watch-test-label-changed,UID:1b9edfe4-ff42-424f-8564-51aafcd111c6,ResourceVersion:24576934,Generation:0,CreationTimestamp:2020-02-16 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 13:47:33.783: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5229,SelfLink:/api/v1/namespaces/watch-5229/configmaps/e2e-watch-test-label-changed,UID:1b9edfe4-ff42-424f-8564-51aafcd111c6,ResourceVersion:24576935,Generation:0,CreationTimestamp:2020-02-16 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 16 13:47:33.783: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5229,SelfLink:/api/v1/namespaces/watch-5229/configmaps/e2e-watch-test-label-changed,UID:1b9edfe4-ff42-424f-8564-51aafcd111c6,ResourceVersion:24576936,Generation:0,CreationTimestamp:2020-02-16 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 16 13:47:43.884: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5229,SelfLink:/api/v1/namespaces/watch-5229/configmaps/e2e-watch-test-label-changed,UID:1b9edfe4-ff42-424f-8564-51aafcd111c6,ResourceVersion:24576952,Generation:0,CreationTimestamp:2020-02-16 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 13:47:43.885: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5229,SelfLink:/api/v1/namespaces/watch-5229/configmaps/e2e-watch-test-label-changed,UID:1b9edfe4-ff42-424f-8564-51aafcd111c6,ResourceVersion:24576953,Generation:0,CreationTimestamp:2020-02-16 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 16 13:47:43.885: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5229,SelfLink:/api/v1/namespaces/watch-5229/configmaps/e2e-watch-test-label-changed,UID:1b9edfe4-ff42-424f-8564-51aafcd111c6,ResourceVersion:24576954,Generation:0,CreationTimestamp:2020-02-16 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:47:43.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5229" for this suite.
Feb 16 13:47:49.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:47:50.092: INFO: namespace watch-5229 deletion completed in 6.197259725s

• [SLOW TEST:16.563 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
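The watch sequence above (ADDED on selector match, DELETED when the label changes away from the selector, ADDED again once the label is restored) can be sketched with a labelled ConfigMap; the label key/value are taken from the log, while the `kubectl --watch` invocation is an assumed equivalent of the test's programmatic watch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored   # the watch selects on this label
data: {}
# An equivalent label-selected watch from the CLI (sketch):
#   kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch
# Editing the label to a non-matching value surfaces as DELETED on the watch,
# and restoring it surfaces as ADDED, exactly as in the events logged above.
```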
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:47:50.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:47:50.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f" in namespace "projected-991" to be "success or failure"
Feb 16 13:47:50.197: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.488519ms
Feb 16 13:47:52.207: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015383316s
Feb 16 13:47:54.219: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026984038s
Feb 16 13:47:56.224: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032385227s
Feb 16 13:47:58.236: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044271335s
Feb 16 13:48:00.245: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Running", Reason="", readiness=true. Elapsed: 10.053565222s
Feb 16 13:48:02.257: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.064871568s
STEP: Saw pod success
Feb 16 13:48:02.257: INFO: Pod "downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f" satisfied condition "success or failure"
Feb 16 13:48:02.259: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f container client-container: 
STEP: delete the pod
Feb 16 13:48:02.298: INFO: Waiting for pod downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f to disappear
Feb 16 13:48:02.345: INFO: Pod downwardapi-volume-e48af27e-f4e1-47c2-8783-96163704204f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:48:02.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-991" for this suite.
Feb 16 13:48:08.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:48:08.508: INFO: namespace projected-991 deletion completed in 6.156912685s

• [SLOW TEST:18.416 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
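The projected downwardAPI spec above exposes the container's cpu limit as a file in a projected volume. A sketch of such a pod (`client-container` matches the container name in the log; the 500m limit, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"           # illustrative; the mounted file reflects this value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: "1m"   # value is written in millicores
```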
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:48:08.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 16 13:48:08.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-926 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 16 13:48:18.086: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0216 13:48:16.699848    1216 log.go:172] (0xc0005a4160) (0xc000732500) Create stream\nI0216 13:48:16.700057    1216 log.go:172] (0xc0005a4160) (0xc000732500) Stream added, broadcasting: 1\nI0216 13:48:16.721282    1216 log.go:172] (0xc0005a4160) Reply frame received for 1\nI0216 13:48:16.721401    1216 log.go:172] (0xc0005a4160) (0xc0007320a0) Create stream\nI0216 13:48:16.721418    1216 log.go:172] (0xc0005a4160) (0xc0007320a0) Stream added, broadcasting: 3\nI0216 13:48:16.727173    1216 log.go:172] (0xc0005a4160) Reply frame received for 3\nI0216 13:48:16.727228    1216 log.go:172] (0xc0005a4160) (0xc00020e000) Create stream\nI0216 13:48:16.727247    1216 log.go:172] (0xc0005a4160) (0xc00020e000) Stream added, broadcasting: 5\nI0216 13:48:16.731311    1216 log.go:172] (0xc0005a4160) Reply frame received for 5\nI0216 13:48:16.731336    1216 log.go:172] (0xc0005a4160) (0xc000258000) Create stream\nI0216 13:48:16.731344    1216 log.go:172] (0xc0005a4160) (0xc000258000) Stream added, broadcasting: 7\nI0216 13:48:16.734208    1216 log.go:172] (0xc0005a4160) Reply frame received for 7\nI0216 13:48:16.734492    1216 log.go:172] (0xc0007320a0) (3) Writing data frame\nI0216 13:48:16.734785    1216 log.go:172] (0xc0007320a0) (3) Writing data frame\nI0216 13:48:16.764684    1216 log.go:172] (0xc0005a4160) Data frame received for 5\nI0216 13:48:16.764730    1216 log.go:172] (0xc00020e000) (5) Data frame handling\nI0216 13:48:16.764756    1216 log.go:172] (0xc00020e000) (5) Data frame sent\nI0216 13:48:16.765990    1216 log.go:172] (0xc0005a4160) Data frame received for 5\nI0216 13:48:16.766007    1216 log.go:172] (0xc00020e000) (5) Data frame handling\nI0216 13:48:16.766015    1216 log.go:172] (0xc00020e000) (5) Data frame sent\nI0216 13:48:18.060916    1216 log.go:172] (0xc0005a4160) (0xc0007320a0) Stream removed, broadcasting: 3\nI0216 13:48:18.061030    1216 log.go:172] (0xc0005a4160) Data frame received for 1\nI0216 13:48:18.061045    1216 log.go:172] (0xc000732500) (1) Data frame handling\nI0216 13:48:18.061054    1216 log.go:172] (0xc000732500) (1) Data frame sent\nI0216 13:48:18.061062    1216 log.go:172] (0xc0005a4160) (0xc000732500) Stream removed, broadcasting: 1\nI0216 13:48:18.061119    1216 log.go:172] (0xc0005a4160) (0xc00020e000) Stream removed, broadcasting: 5\nI0216 13:48:18.061145    1216 log.go:172] (0xc0005a4160) (0xc000258000) Stream removed, broadcasting: 7\nI0216 13:48:18.061162    1216 log.go:172] (0xc0005a4160) Go away received\nI0216 13:48:18.061464    1216 log.go:172] (0xc0005a4160) (0xc000732500) Stream removed, broadcasting: 1\nI0216 13:48:18.061481    1216 log.go:172] (0xc0005a4160) (0xc0007320a0) Stream removed, broadcasting: 3\nI0216 13:48:18.061491    1216 log.go:172] (0xc0005a4160) (0xc00020e000) Stream removed, broadcasting: 5\nI0216 13:48:18.061504    1216 log.go:172] (0xc0005a4160) (0xc000258000) Stream removed, broadcasting: 7\n"
Feb 16 13:48:18.086: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:48:20.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-926" for this suite.
Feb 16 13:48:26.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:48:26.292: INFO: namespace kubectl-926 deletion completed in 6.181137158s

• [SLOW TEST:17.783 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
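The `kubectl run --generator=job/v1` invocation above is flagged as deprecated in stderr. The Job it creates can be expressed declaratively; this is a hedged sketch reconstructed from the flags in the log (the test itself drives it through `kubectl run --rm --attach --stdin`, which also streams stdin and deletes the Job afterwards):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure          # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                     # from --stdin; lets the attach session feed "abcd1234"
```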
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:48:26.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 16 13:48:26.438: INFO: Waiting up to 5m0s for pod "pod-d165861d-cfbe-4910-820d-6010d0978b09" in namespace "emptydir-2026" to be "success or failure"
Feb 16 13:48:26.477: INFO: Pod "pod-d165861d-cfbe-4910-820d-6010d0978b09": Phase="Pending", Reason="", readiness=false. Elapsed: 39.004366ms
Feb 16 13:48:28.497: INFO: Pod "pod-d165861d-cfbe-4910-820d-6010d0978b09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059702621s
Feb 16 13:48:30.506: INFO: Pod "pod-d165861d-cfbe-4910-820d-6010d0978b09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068268348s
Feb 16 13:48:32.528: INFO: Pod "pod-d165861d-cfbe-4910-820d-6010d0978b09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089779835s
Feb 16 13:48:34.547: INFO: Pod "pod-d165861d-cfbe-4910-820d-6010d0978b09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10937212s
STEP: Saw pod success
Feb 16 13:48:34.547: INFO: Pod "pod-d165861d-cfbe-4910-820d-6010d0978b09" satisfied condition "success or failure"
Feb 16 13:48:34.555: INFO: Trying to get logs from node iruya-node pod pod-d165861d-cfbe-4910-820d-6010d0978b09 container test-container: 
STEP: delete the pod
Feb 16 13:48:34.688: INFO: Waiting for pod pod-d165861d-cfbe-4910-820d-6010d0978b09 to disappear
Feb 16 13:48:34.702: INFO: Pod pod-d165861d-cfbe-4910-820d-6010d0978b09 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:48:34.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2026" for this suite.
Feb 16 13:48:40.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:48:40.977: INFO: namespace emptydir-2026 deletion completed in 6.264439578s

• [SLOW TEST:14.684 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
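The (root,0666,tmpfs) variant writes a file with mode 0666 into a memory-backed emptyDir and asserts what the container observes. An illustrative busybox equivalent (the conformance test actually uses its own mounttest image, so the command here is only an assumed stand-in):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # write a file as root, force mode 0666, and print the observed mode
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs backing, as in the (..,..,tmpfs) variant
```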
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:48:40.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 16 13:48:41.736: INFO: created pod pod-service-account-defaultsa
Feb 16 13:48:41.736: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 16 13:48:41.758: INFO: created pod pod-service-account-mountsa
Feb 16 13:48:41.758: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 16 13:48:41.775: INFO: created pod pod-service-account-nomountsa
Feb 16 13:48:41.776: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 16 13:48:41.788: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 16 13:48:41.788: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 16 13:48:41.817: INFO: created pod pod-service-account-mountsa-mountspec
Feb 16 13:48:41.817: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 16 13:48:41.967: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 16 13:48:41.967: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 16 13:48:42.043: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 16 13:48:42.043: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 16 13:48:42.167: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 16 13:48:42.167: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 16 13:48:42.194: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 16 13:48:42.194: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:48:42.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4857" for this suite.
Feb 16 13:49:08.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:49:08.153: INFO: namespace svcaccounts-4857 deletion completed in 25.92463798s

• [SLOW TEST:27.175 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
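The pod matrix above crosses the ServiceAccount-level and pod-spec-level `automountServiceAccountToken` settings; as the logged results show, when the pod spec sets the field it takes precedence over the ServiceAccount's value. A sketch of the opt-out case (names illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # SA-level default: do not mount a token
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level field wins whenever it is set
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]
```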
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:49:08.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 13:49:08.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5525'
Feb 16 13:49:08.414: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 13:49:08.414: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 16 13:49:08.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5525'
Feb 16 13:49:08.556: INFO: stderr: ""
Feb 16 13:49:08.557: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:49:08.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5525" for this suite.
Feb 16 13:49:14.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:49:14.717: INFO: namespace kubectl-5525 deletion completed in 6.151315966s

• [SLOW TEST:6.564 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:49:14.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 16 13:49:14.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2762'
Feb 16 13:49:15.114: INFO: stderr: ""
Feb 16 13:49:15.114: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 13:49:15.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2762'
Feb 16 13:49:15.240: INFO: stderr: ""
Feb 16 13:49:15.240: INFO: stdout: "update-demo-nautilus-9jtln update-demo-nautilus-zf2px "
Feb 16 13:49:15.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jtln -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762'
Feb 16 13:49:15.370: INFO: stderr: ""
Feb 16 13:49:15.370: INFO: stdout: ""
Feb 16 13:49:15.370: INFO: update-demo-nautilus-9jtln is created but not running
Feb 16 13:49:20.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2762'
Feb 16 13:49:20.689: INFO: stderr: ""
Feb 16 13:49:20.689: INFO: stdout: "update-demo-nautilus-9jtln update-demo-nautilus-zf2px "
Feb 16 13:49:20.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jtln -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762'
Feb 16 13:49:20.785: INFO: stderr: ""
Feb 16 13:49:20.785: INFO: stdout: ""
Feb 16 13:49:20.785: INFO: update-demo-nautilus-9jtln is created but not running
Feb 16 13:49:25.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2762'
Feb 16 13:49:25.974: INFO: stderr: ""
Feb 16 13:49:25.974: INFO: stdout: "update-demo-nautilus-9jtln update-demo-nautilus-zf2px "
Feb 16 13:49:25.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jtln -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762'
Feb 16 13:49:26.082: INFO: stderr: ""
Feb 16 13:49:26.082: INFO: stdout: "true"
Feb 16 13:49:26.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jtln -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2762'
Feb 16 13:49:26.167: INFO: stderr: ""
Feb 16 13:49:26.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 13:49:26.167: INFO: validating pod update-demo-nautilus-9jtln
Feb 16 13:49:26.179: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 13:49:26.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 13:49:26.179: INFO: update-demo-nautilus-9jtln is verified up and running
Feb 16 13:49:26.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zf2px -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2762'
Feb 16 13:49:26.252: INFO: stderr: ""
Feb 16 13:49:26.252: INFO: stdout: "true"
Feb 16 13:49:26.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zf2px -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2762'
Feb 16 13:49:26.341: INFO: stderr: ""
Feb 16 13:49:26.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 13:49:26.341: INFO: validating pod update-demo-nautilus-zf2px
Feb 16 13:49:26.363: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 13:49:26.363: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 13:49:26.363: INFO: update-demo-nautilus-zf2px is verified up and running
STEP: using delete to clean up resources
Feb 16 13:49:26.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2762'
Feb 16 13:49:26.468: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 13:49:26.468: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 16 13:49:26.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2762'
Feb 16 13:49:26.655: INFO: stderr: "No resources found.\n"
Feb 16 13:49:26.655: INFO: stdout: ""
Feb 16 13:49:26.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2762 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 13:49:26.748: INFO: stderr: ""
Feb 16 13:49:26.748: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:49:26.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2762" for this suite.
Feb 16 13:49:48.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:49:48.964: INFO: namespace kubectl-2762 deletion completed in 22.173191053s

• [SLOW TEST:34.246 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
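The Update Demo cleanup above is a forced, immediate delete: `--grace-period=0 --force` skips the graceful-termination window, which is exactly what the kubectl warning in the log is about. A minimal sketch of the argv the test builds (namespace taken from the log, everything else illustrative):

```python
# Sketch of the forced-delete invocation the test drives through kubectl.
# The namespace comes from the log; the manifest source ("-" = stdin) is
# how the test feeds the resources it created earlier back to kubectl.
def force_delete_args(namespace, manifest="-"):
    """Build the kubectl argv for an immediate, unconfirmed deletion."""
    return [
        "kubectl", "delete",
        "--grace-period=0",   # skip the graceful termination window
        "--force",            # don't wait for confirmation (hence the warning)
        "-f", manifest,
        f"--namespace={namespace}",
    ]

args = force_delete_args("kubectl-2762")
```

The follow-up `get rc,svc -l name=update-demo` and the go-template pod query are then just checks that nothing labeled `name=update-demo` survived.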
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:49:48.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-62547ca8-9eb8-4c6f-bf85-d29481061b9c
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-62547ca8-9eb8-4c6f-bf85-d29481061b9c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:49:59.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6700" for this suite.
Feb 16 13:50:21.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:50:21.415: INFO: namespace projected-6700 deletion completed in 22.178709853s

• [SLOW TEST:32.451 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
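The Projected configMap test above hinges on a `projected` volume that sources a ConfigMap: because the kubelet syncs projected volumes, updating the ConfigMap's data eventually shows up in the mounted files, which is the "waiting to observe update in volume" step. A rough sketch of the pod shape, with the ConfigMap name from the log and the pod name, image, and mount path as assumptions:

```python
# Sketch of the pod the test creates: a projected volume backed by the
# ConfigMap named in the log. Image and paths are illustrative only.
cm_name = "projected-configmap-test-upd-62547ca8-9eb8-4c6f-bf85-d29481061b9c"
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps"},  # hypothetical name
    "spec": {
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",  # assumption; the e2e suite uses its own images
            "volumeMounts": [{
                "name": "projected-configmap-volume",
                "mountPath": "/etc/projected-configmap-volume",  # assumed path
            }],
        }],
        "volumes": [{
            "name": "projected-configmap-volume",
            "projected": {"sources": [{"configMap": {"name": cm_name}}]},
        }],
    },
}
```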
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:50:21.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 16 13:50:21.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5484,SelfLink:/api/v1/namespaces/watch-5484/configmaps/e2e-watch-test-resource-version,UID:bffbfa45-3405-40da-955e-d9d01b0b10d5,ResourceVersion:24577449,Generation:0,CreationTimestamp:2020-02-16 13:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 13:50:21.576: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5484,SelfLink:/api/v1/namespaces/watch-5484/configmaps/e2e-watch-test-resource-version,UID:bffbfa45-3405-40da-955e-d9d01b0b10d5,ResourceVersion:24577450,Generation:0,CreationTimestamp:2020-02-16 13:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:50:21.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5484" for this suite.
Feb 16 13:50:27.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:50:27.752: INFO: namespace watch-5484 deletion completed in 6.170285334s

• [SLOW TEST:6.337 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
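The Watchers test above replays history: it mutates the ConfigMap twice, deletes it, and only then opens a watch pinned at the resourceVersion returned by the first update, so the server delivers the later MODIFIED (mutation: 2) and DELETED events — the two `Got :` lines in the log. A sketch of the underlying list-then-watch request shape (the starting resourceVersion below is illustrative, not taken from the log):

```python
# Shape of the watch request the client issues against the ConfigMaps
# endpoint. resourceVersion pins where event delivery starts.
from urllib.parse import urlencode

def watch_url(namespace, resource_version):
    query = urlencode({
        "watch": "true",
        "fieldSelector": "metadata.name=e2e-watch-test-resource-version",
        "resourceVersion": str(resource_version),
    })
    return f"/api/v1/namespaces/{namespace}/configmaps?{query}"

# illustrative resourceVersion; the test uses the one from its first update
url = watch_url("watch-5484", 100)
```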
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:50:27.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:50:27.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:50:36.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2879" for this suite.
Feb 16 13:51:30.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:51:30.711: INFO: namespace pods-2879 deletion completed in 54.410768481s

• [SLOW TEST:62.958 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
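The websocket-exec test above targets the pod's `exec` subresource: the client sends a GET with the command and stream flags as query parameters and upgrades the connection to a websocket (the `channel.k8s.io` family of subprotocols) so stdout/stderr come back as framed channels. A sketch of the URL shape only; the pod name and command here are assumptions:

```python
# Shape of the exec subresource request. Repeated "command" parameters
# carry argv; stdout/stderr/tty select which streams are wired up.
from urllib.parse import urlencode

def exec_url(namespace, pod, command):
    params = [("command", c) for c in command]
    params += [("stdout", "true"), ("stderr", "true"), ("tty", "false")]
    return f"/api/v1/namespaces/{namespace}/pods/{pod}/exec?{urlencode(params)}"

# hypothetical pod name; the test's real pod carries a generated suffix
url = exec_url("pods-2879", "pod-exec-websocket", ["cat", "/etc/resolv.conf"])
```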
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:51:30.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0216 13:51:40.888877       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 13:51:40.888: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:51:40.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3881" for this suite.
Feb 16 13:51:46.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:51:47.035: INFO: namespace gc-3881 deletion completed in 6.14288919s

• [SLOW TEST:16.324 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
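The garbage-collector test above relies on `metadata.ownerReferences`: every pod the RC creates points back at the RC's UID, so deleting the RC without orphaning lets the garbage collector find and delete the dependents — the "wait for all pods to be garbage collected" step. A sketch of the linkage (UID and names are hypothetical):

```python
# How a dependent pod references its owning ReplicationController.
rc_uid = "11111111-2222-3333-4444-555555555555"  # illustrative UID
pod_meta = {
    "name": "simpletest.rc-abcde",  # hypothetical generated pod name
    "ownerReferences": [{
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "simpletest.rc",
        "uid": rc_uid,
        "controller": True,
        "blockOwnerDeletion": True,
    }],
}

def is_owned_by(meta, uid):
    """The GC's core question: does this object depend on the deleted owner?"""
    return any(ref["uid"] == uid for ref in meta.get("ownerReferences", []))
```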
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:51:47.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 16 13:52:05.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:05.184: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:07.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:07.195: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:09.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:09.195: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:11.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:11.198: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:13.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:13.199: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:15.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:15.198: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:17.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:17.196: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:19.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:19.191: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:21.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:21.241: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:23.187: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:23.201: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:25.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:25.203: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:27.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:27.199: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:29.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:29.193: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:31.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:31.191: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:33.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:33.195: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:35.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:35.203: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 13:52:37.185: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 13:52:37.194: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:52:37.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-655" for this suite.
Feb 16 13:53:01.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:53:01.415: INFO: namespace container-lifecycle-hook-655 deletion completed in 24.144980075s

• [SLOW TEST:74.380 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
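The long run of "still exists" polls above is the preStop hook doing its job: a `lifecycle.preStop.exec` handler runs inside the container before it is stopped, so deletion takes the whole handler duration plus the termination grace period. A sketch of the container fragment; the image and the handler command are assumptions (the real test calls back to the handle-pod created in BeforeEach):

```python
# Container spec fragment with a preStop exec handler. The kubelet runs
# the command in the container before sending it the stop signal.
container = {
    "name": "pod-with-prestop-exec-hook",
    "image": "busybox",  # assumption
    "lifecycle": {
        "preStop": {
            "exec": {
                # hypothetical command; the real test hits the handler pod
                "command": ["sh", "-c", "wget -qO- http://handler:8080/echo"],
            }
        }
    },
}
```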
SSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:53:01.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 16 13:53:12.124: INFO: Successfully updated pod "pod-update-activedeadlineseconds-646a1971-6ed8-41b5-ae83-fed4f694b747"
Feb 16 13:53:12.124: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-646a1971-6ed8-41b5-ae83-fed4f694b747" in namespace "pods-1413" to be "terminated due to deadline exceeded"
Feb 16 13:53:12.133: INFO: Pod "pod-update-activedeadlineseconds-646a1971-6ed8-41b5-ae83-fed4f694b747": Phase="Running", Reason="", readiness=true. Elapsed: 9.378091ms
Feb 16 13:53:14.237: INFO: Pod "pod-update-activedeadlineseconds-646a1971-6ed8-41b5-ae83-fed4f694b747": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.113430097s
Feb 16 13:53:14.238: INFO: Pod "pod-update-activedeadlineseconds-646a1971-6ed8-41b5-ae83-fed4f694b747" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:53:14.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1413" for this suite.
Feb 16 13:53:20.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:53:20.463: INFO: namespace pods-1413 deletion completed in 6.215823238s

• [SLOW TEST:19.048 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
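`activeDeadlineSeconds` is one of the few pod-spec fields that may be mutated on a running pod; once the deadline elapses, the kubelet fails the pod with reason `DeadlineExceeded`, which is the Phase="Running" to Phase="Failed" transition the log records about two seconds after the update. A sketch of the update and the condition the test waits on (image and deadline value are assumptions):

```python
# The mutable field and the terminal status the test polls for.
pod_spec = {"containers": [{"name": "nginx", "image": "nginx"}]}  # image assumed
pod_spec["activeDeadlineSeconds"] = 1  # the update; value illustrative

def terminated_by_deadline(status):
    """Matches the 'terminated due to deadline exceeded' condition."""
    return status.get("phase") == "Failed" and status.get("reason") == "DeadlineExceeded"
```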
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:53:20.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:53:20.578: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9" in namespace "projected-4126" to be "success or failure"
Feb 16 13:53:20.602: INFO: Pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.515221ms
Feb 16 13:53:22.622: INFO: Pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043977634s
Feb 16 13:53:24.637: INFO: Pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058417423s
Feb 16 13:53:26.646: INFO: Pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067444355s
Feb 16 13:53:28.652: INFO: Pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073300031s
STEP: Saw pod success
Feb 16 13:53:28.652: INFO: Pod "downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9" satisfied condition "success or failure"
Feb 16 13:53:28.655: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9 container client-container: 
STEP: delete the pod
Feb 16 13:53:28.717: INFO: Waiting for pod downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9 to disappear
Feb 16 13:53:28.847: INFO: Pod downwardapi-volume-66d62727-7941-4e4a-8209-eefb833dafc9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:53:28.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4126" for this suite.
Feb 16 13:53:34.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:53:34.996: INFO: namespace projected-4126 deletion completed in 6.136972952s

• [SLOW TEST:14.531 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
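The downward-API test above exposes `limits.memory` through a `resourceFieldRef` volume item; the behavior under test is the fallback: when the container declares no memory limit, the value written into the file is the node's allocatable memory instead. A sketch of the volume item and the fallback rule (file path is an assumption):

```python
# Downward-API volume item exposing the container's memory limit.
downward_item = {
    "path": "memory_limit",  # assumed file name under the mount
    "resourceFieldRef": {
        "containerName": "client-container",
        "resource": "limits.memory",
    },
}

def effective_memory_limit(container_limits, node_allocatable_memory):
    """The fallback the test verifies: unset limit -> node allocatable."""
    return container_limits.get("memory", node_allocatable_memory)
```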
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:53:34.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:53:43.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5477" for this suite.
Feb 16 13:54:29.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:54:29.361: INFO: namespace kubelet-test-5477 deletion completed in 46.140129838s

• [SLOW TEST:54.364 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
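The read-only busybox test above pins the container's root filesystem via `securityContext.readOnlyRootFilesystem`, so any write to `/` fails inside the container. A sketch of the container fragment; the command used to provoke the write is an assumption:

```python
# Container spec fragment with a read-only root filesystem. The write in
# the command is expected to fail; hypothetical probe command.
container = {
    "name": "busybox-readonly-fs",  # hypothetical name
    "image": "busybox",
    "command": ["sh", "-c", "echo test > /file; sleep 240"],  # assumed probe
    "securityContext": {"readOnlyRootFilesystem": True},
}
```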
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:54:29.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:54:29.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac" in namespace "downward-api-2941" to be "success or failure"
Feb 16 13:54:29.507: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac": Phase="Pending", Reason="", readiness=false. Elapsed: 15.567216ms
Feb 16 13:54:31.515: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023785163s
Feb 16 13:54:33.524: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032318719s
Feb 16 13:54:35.535: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043349955s
Feb 16 13:54:37.546: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054370751s
Feb 16 13:54:39.560: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068574768s
STEP: Saw pod success
Feb 16 13:54:39.560: INFO: Pod "downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac" satisfied condition "success or failure"
Feb 16 13:54:39.564: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac container client-container: 
STEP: delete the pod
Feb 16 13:54:39.763: INFO: Waiting for pod downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac to disappear
Feb 16 13:54:39.795: INFO: Pod downwardapi-volume-9e86fb76-323c-4645-8703-ad96e78219ac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:54:39.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2941" for this suite.
Feb 16 13:54:45.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:54:46.037: INFO: namespace downward-api-2941 deletion completed in 6.2199743s

• [SLOW TEST:16.676 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
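"should provide podname only" mounts a single downward-API volume item backed by `fieldRef: metadata.name`; the test then reads the resulting file back via the container's logs (the `client-container` log fetch above). The volume fragment, with the file path as an assumption:

```python
# Downward-API volume exposing only the pod's own name as a file.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "podname",  # assumed file name
            "fieldRef": {"fieldPath": "metadata.name"},
        }],
    },
}
```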
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:54:46.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1169/configmap-test-56fb3a7e-a8f9-4b1d-bcba-30317c40e65b
STEP: Creating a pod to test consume configMaps
Feb 16 13:54:46.170: INFO: Waiting up to 5m0s for pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de" in namespace "configmap-1169" to be "success or failure"
Feb 16 13:54:46.179: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.275009ms
Feb 16 13:54:48.186: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015469503s
Feb 16 13:54:50.197: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026794857s
Feb 16 13:54:52.207: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03660168s
Feb 16 13:54:54.215: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044658262s
Feb 16 13:54:56.236: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065511584s
STEP: Saw pod success
Feb 16 13:54:56.236: INFO: Pod "pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de" satisfied condition "success or failure"
Feb 16 13:54:56.241: INFO: Trying to get logs from node iruya-node pod pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de container env-test: 
STEP: delete the pod
Feb 16 13:54:56.323: INFO: Waiting for pod pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de to disappear
Feb 16 13:54:56.328: INFO: Pod pod-configmaps-64e548af-945e-47aa-89f3-e59809b248de no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:54:56.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1169" for this suite.
Feb 16 13:55:02.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:55:02.513: INFO: namespace configmap-1169 deletion completed in 6.181543475s

• [SLOW TEST:16.475 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
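Consuming a ConfigMap "via environment variable" means an `env[].valueFrom.configMapKeyRef` pointing at one key of the ConfigMap created above; the env-test container then echoes the variable so the test can check it in the logs. A sketch, with the ConfigMap name from the log and the variable and key names as assumptions:

```python
# env entry wiring one ConfigMap key into a container environment variable.
env_entry = {
    "name": "CONFIG_DATA_1",  # hypothetical variable name
    "valueFrom": {
        "configMapKeyRef": {
            "name": "configmap-test-56fb3a7e-a8f9-4b1d-bcba-30317c40e65b",
            "key": "data-1",  # assumed key
        }
    },
}
```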
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:55:02.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-3af7db60-f62b-4748-9af1-d92bf8035641
STEP: Creating a pod to test consume configMaps
Feb 16 13:55:02.683: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47" in namespace "configmap-8038" to be "success or failure"
Feb 16 13:55:02.718: INFO: Pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47": Phase="Pending", Reason="", readiness=false. Elapsed: 34.749817ms
Feb 16 13:55:04.733: INFO: Pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049405894s
Feb 16 13:55:06.745: INFO: Pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061930208s
Feb 16 13:55:08.756: INFO: Pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073139512s
Feb 16 13:55:10.763: INFO: Pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080093127s
STEP: Saw pod success
Feb 16 13:55:10.763: INFO: Pod "pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47" satisfied condition "success or failure"
Feb 16 13:55:10.767: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47 container configmap-volume-test: 
STEP: delete the pod
Feb 16 13:55:10.936: INFO: Waiting for pod pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47 to disappear
Feb 16 13:55:10.955: INFO: Pod pod-configmaps-c3a3b43b-a73d-41cc-8b76-77855cc5bb47 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:55:10.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8038" for this suite.
Feb 16 13:55:16.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:55:17.121: INFO: namespace configmap-8038 deletion completed in 6.16135513s

• [SLOW TEST:14.608 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
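For context, the ConfigMap test above builds a pod roughly like the following minimal sketch. All names here are illustrative (the real test generates UUID-suffixed names, visible in the log); the image and exact args are assumptions, not taken from this log:

```yaml
# Hypothetical sketch of the pod the "mappings as non-root" test creates:
# a ConfigMap mounted as a volume, with an items: mapping that renames a
# key's path, consumed by a container running as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # real names are generated per run
spec:
  securityContext:
    runAsUser: 1000                  # the "as non-root" part of the test
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name matches the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    args: ["--file_content=/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:                         # the "mappings" under test:
      - key: data-2                  # key in the ConfigMap
        path: path/to/data-2         # remapped path inside the volume
```

The pod runs to completion ("Succeeded"), which is why the framework waits for the "success or failure" condition rather than readiness.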
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:55:17.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 16 13:55:25.274: INFO: Pod pod-hostip-6e18015f-bb34-40cd-998b-e26e0c36aa59 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:55:25.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9914" for this suite.
Feb 16 13:55:47.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:55:47.440: INFO: namespace pods-9914 deletion completed in 22.158661061s

• [SLOW TEST:30.318 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:55:47.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-0a1fce72-3aa0-40e7-983b-de3ce358dbf5
STEP: Creating secret with name secret-projected-all-test-volume-e9a3d197-ab80-47f3-bcc7-5d788e076e6e
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 16 13:55:47.597: INFO: Waiting up to 5m0s for pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14" in namespace "projected-9814" to be "success or failure"
Feb 16 13:55:47.603: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 5.749511ms
Feb 16 13:55:49.615: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017984543s
Feb 16 13:55:51.625: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028235947s
Feb 16 13:55:53.652: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054841543s
Feb 16 13:55:55.661: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06357465s
Feb 16 13:55:57.667: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06988799s
STEP: Saw pod success
Feb 16 13:55:57.667: INFO: Pod "projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14" satisfied condition "success or failure"
Feb 16 13:55:57.671: INFO: Trying to get logs from node iruya-node pod projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14 container projected-all-volume-test: 
STEP: delete the pod
Feb 16 13:55:57.734: INFO: Waiting for pod projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14 to disappear
Feb 16 13:55:57.757: INFO: Pod projected-volume-17c32f38-8269-47be-8533-7efd98fd1e14 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:55:57.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9814" for this suite.
Feb 16 13:56:03.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:56:03.970: INFO: namespace projected-9814 deletion completed in 6.206930475s

• [SLOW TEST:16.529 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
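The "Projected combined" test above mounts all three projection sources into one volume. A minimal sketch of that volume stanza, with illustrative names standing in for the generated ones in the log:

```yaml
# Hypothetical sketch: a single projected volume combining the configMap,
# secret, and downwardAPI sources that the test creates and then reads back.
  volumes:
  - name: projected-all-volume
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume-example
      - secret:
          name: secret-projected-all-test-volume-example
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

A projected volume merges all sources under one mount path, which is what lets the test verify every component of the projection API with a single container.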
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:56:03.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 16 13:56:04.093: INFO: Waiting up to 5m0s for pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33" in namespace "emptydir-9410" to be "success or failure"
Feb 16 13:56:04.133: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 39.260137ms
Feb 16 13:56:06.153: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059586416s
Feb 16 13:56:08.162: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069135929s
Feb 16 13:56:10.179: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086015912s
Feb 16 13:56:12.190: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096743307s
Feb 16 13:56:14.198: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104513069s
STEP: Saw pod success
Feb 16 13:56:14.198: INFO: Pod "pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33" satisfied condition "success or failure"
Feb 16 13:56:14.202: INFO: Trying to get logs from node iruya-node pod pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33 container test-container: 
STEP: delete the pod
Feb 16 13:56:14.271: INFO: Waiting for pod pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33 to disappear
Feb 16 13:56:14.280: INFO: Pod pod-43b9ee1b-ebc9-49b1-b65f-6b48212cbf33 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:56:14.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9410" for this suite.
Feb 16 13:56:20.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:56:20.529: INFO: namespace emptydir-9410 deletion completed in 6.237996518s

• [SLOW TEST:16.559 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
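The (root,0644,tmpfs) EmptyDir case above corresponds to a volume backed by memory rather than node disk. A minimal sketch of the relevant stanza; the container side (which writes a file with mode 0644 and reads its permissions back) is omitted:

```yaml
# Hypothetical sketch: an emptyDir volume with medium: Memory, which the
# kubelet backs with tmpfs -- the "tmpfs" part of the test name.
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs backing; default ("") would use node storage
```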
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:56:20.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:56:20.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b" in namespace "downward-api-3067" to be "success or failure"
Feb 16 13:56:20.667: INFO: Pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.196878ms
Feb 16 13:56:22.679: INFO: Pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023884561s
Feb 16 13:56:24.685: INFO: Pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02956548s
Feb 16 13:56:26.696: INFO: Pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040787915s
Feb 16 13:56:28.710: INFO: Pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054384873s
STEP: Saw pod success
Feb 16 13:56:28.710: INFO: Pod "downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b" satisfied condition "success or failure"
Feb 16 13:56:28.720: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b container client-container: 
STEP: delete the pod
Feb 16 13:56:28.977: INFO: Waiting for pod downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b to disappear
Feb 16 13:56:29.008: INFO: Pod downwardapi-volume-7b484c3f-8adb-472b-9d4b-4cd4ead5a61b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:56:29.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3067" for this suite.
Feb 16 13:56:35.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:56:35.153: INFO: namespace downward-api-3067 deletion completed in 6.13506619s

• [SLOW TEST:14.624 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
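The Downward API test above relies on `resourceFieldRef` defaulting: when a container declares no CPU limit, the value exposed through the volume falls back to the node's allocatable CPU. A minimal sketch of the volume stanza, using the `client-container` name from the log; the divisor value is an assumption:

```yaml
# Hypothetical sketch: a downwardAPI volume exposing the container's CPU
# limit. With no limit set in the pod spec, the kubelet substitutes node
# allocatable CPU, which is what this test asserts.
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m               # report the value in millicores
```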
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:56:35.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:56:35.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-386" for this suite.
Feb 16 13:56:41.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:56:41.515: INFO: namespace kubelet-test-386 deletion completed in 6.137915507s

• [SLOW TEST:6.361 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:56:41.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-19401e04-0c52-43e5-8c3e-734b758921e2
STEP: Creating a pod to test consume secrets
Feb 16 13:56:41.772: INFO: Waiting up to 5m0s for pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b" in namespace "secrets-488" to be "success or failure"
Feb 16 13:56:41.812: INFO: Pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.097955ms
Feb 16 13:56:43.830: INFO: Pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057941441s
Feb 16 13:56:45.844: INFO: Pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071737266s
Feb 16 13:56:47.853: INFO: Pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081411507s
Feb 16 13:56:49.866: INFO: Pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093518621s
STEP: Saw pod success
Feb 16 13:56:49.866: INFO: Pod "pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b" satisfied condition "success or failure"
Feb 16 13:56:49.872: INFO: Trying to get logs from node iruya-node pod pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b container secret-volume-test: 
STEP: delete the pod
Feb 16 13:56:49.958: INFO: Waiting for pod pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b to disappear
Feb 16 13:56:50.246: INFO: Pod pod-secrets-cffbf181-9863-46e9-8bea-7c5a56848a7b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:56:50.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-488" for this suite.
Feb 16 13:56:56.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:56:56.475: INFO: namespace secrets-488 deletion completed in 6.211800989s

• [SLOW TEST:14.960 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
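The Secrets test above exercises the `defaultMode` field on a secret volume, which sets the file permissions applied to every projected key. A minimal sketch of the volume stanza; the secret name and the specific mode are illustrative:

```yaml
# Hypothetical sketch: a secret volume with defaultMode set, so files
# created from the secret's keys get that permission bit pattern
# (instead of the 0644 default), which the test container verifies.
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0400   # e.g. owner-read-only; real test value may differ
```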
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:56:56.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 16 13:56:56.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5976'
Feb 16 13:56:58.780: INFO: stderr: ""
Feb 16 13:56:58.780: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 13:56:58.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5976'
Feb 16 13:56:58.920: INFO: stderr: ""
Feb 16 13:56:58.920: INFO: stdout: "update-demo-nautilus-6p24f update-demo-nautilus-pslvj "
Feb 16 13:56:58.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p24f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:56:59.038: INFO: stderr: ""
Feb 16 13:56:59.038: INFO: stdout: ""
Feb 16 13:56:59.038: INFO: update-demo-nautilus-6p24f is created but not running
Feb 16 13:57:04.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5976'
Feb 16 13:57:05.361: INFO: stderr: ""
Feb 16 13:57:05.362: INFO: stdout: "update-demo-nautilus-6p24f update-demo-nautilus-pslvj "
Feb 16 13:57:05.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p24f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:06.088: INFO: stderr: ""
Feb 16 13:57:06.088: INFO: stdout: ""
Feb 16 13:57:06.088: INFO: update-demo-nautilus-6p24f is created but not running
Feb 16 13:57:11.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5976'
Feb 16 13:57:11.241: INFO: stderr: ""
Feb 16 13:57:11.241: INFO: stdout: "update-demo-nautilus-6p24f update-demo-nautilus-pslvj "
Feb 16 13:57:11.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p24f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:11.371: INFO: stderr: ""
Feb 16 13:57:11.371: INFO: stdout: "true"
Feb 16 13:57:11.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p24f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:11.469: INFO: stderr: ""
Feb 16 13:57:11.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 13:57:11.469: INFO: validating pod update-demo-nautilus-6p24f
Feb 16 13:57:11.486: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 13:57:11.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 13:57:11.486: INFO: update-demo-nautilus-6p24f is verified up and running
Feb 16 13:57:11.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pslvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:11.583: INFO: stderr: ""
Feb 16 13:57:11.583: INFO: stdout: "true"
Feb 16 13:57:11.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pslvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:11.692: INFO: stderr: ""
Feb 16 13:57:11.692: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 13:57:11.692: INFO: validating pod update-demo-nautilus-pslvj
Feb 16 13:57:11.699: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 13:57:11.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 13:57:11.699: INFO: update-demo-nautilus-pslvj is verified up and running
STEP: rolling-update to new replication controller
Feb 16 13:57:11.703: INFO: scanned /root for discovery docs: 
Feb 16 13:57:11.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5976'
Feb 16 13:57:40.838: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 16 13:57:40.839: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 13:57:40.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5976'
Feb 16 13:57:41.014: INFO: stderr: ""
Feb 16 13:57:41.014: INFO: stdout: "update-demo-kitten-dbhw9 update-demo-kitten-fxs52 "
Feb 16 13:57:41.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dbhw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:41.130: INFO: stderr: ""
Feb 16 13:57:41.130: INFO: stdout: "true"
Feb 16 13:57:41.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dbhw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:41.213: INFO: stderr: ""
Feb 16 13:57:41.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 16 13:57:41.213: INFO: validating pod update-demo-kitten-dbhw9
Feb 16 13:57:41.243: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 16 13:57:41.244: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 16 13:57:41.244: INFO: update-demo-kitten-dbhw9 is verified up and running
Feb 16 13:57:41.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fxs52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:41.320: INFO: stderr: ""
Feb 16 13:57:41.320: INFO: stdout: "true"
Feb 16 13:57:41.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fxs52 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5976'
Feb 16 13:57:41.419: INFO: stderr: ""
Feb 16 13:57:41.419: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 16 13:57:41.419: INFO: validating pod update-demo-kitten-fxs52
Feb 16 13:57:41.453: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 16 13:57:41.453: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 16 13:57:41.453: INFO: update-demo-kitten-fxs52 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:57:41.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5976" for this suite.
Feb 16 13:58:07.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:58:07.611: INFO: namespace kubectl-5976 deletion completed in 26.13786403s

• [SLOW TEST:71.136 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
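The rolling-update run above pipes a replacement ReplicationController to `kubectl rolling-update ... -f -` (hence the nautilus-to-kitten image swap and the "Renaming update-demo-kitten to update-demo-nautilus" step in the output). A minimal sketch of what that stdin manifest could look like; this is a reconstruction, not the actual fixture, and the selector is simplified (the real fixture must differentiate the old and new controllers):

```yaml
# Hypothetical sketch of the replacement RC fed to rolling-update:
# a new controller name and a new image, keeping the name=update-demo
# pod label that the test's wait loop selects on.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-kitten
spec:
  replicas: 2
  selector:
    name: update-demo          # simplified; real fixture differs from old RC
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/kitten:1.0
```

Note the stderr line in the log: `rolling-update` was already deprecated in favor of `kubectl rollout` (with Deployments) at this Kubernetes version, and it was removed in later releases.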
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:58:07.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 13:58:07.766: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 16 13:58:12.779: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 16 13:58:16.796: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 16 13:58:18.835: INFO: Creating deployment "test-rollover-deployment"
Feb 16 13:58:18.858: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 16 13:58:20.880: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 16 13:58:20.892: INFO: Ensure that both replica sets have 1 created replica
Feb 16 13:58:20.912: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 16 13:58:20.930: INFO: Updating deployment test-rollover-deployment
Feb 16 13:58:20.930: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 16 13:58:23.297: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 16 13:58:23.305: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 16 13:58:23.311: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:23.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458302, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:25.324: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:25.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458302, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:27.325: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:27.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458302, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:29.325: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:29.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458309, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:31.326: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:31.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458309, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:33.325: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:33.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458309, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:35.324: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:35.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458309, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:37.325: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 13:58:37.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458299, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458309, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717458298, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:58:39.327: INFO: Ensure that both old replica sets have no replicas
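The loop above polls every two seconds until the rollover completes; note that the 13:58:29 dump already shows ReadyReplicas:2 while AvailableReplicas stays at 1, because the deployment sets MinReadySeconds:10 and a ready pod only counts as available after that delay. A minimal sketch of the completeness predicate, with field names taken from the status dumps in the log (this is an illustration, not the framework's actual helper):

```python
# Illustrative reimplementation (not the e2e framework's code) of the
# condition the poll loop waits for, over the v1.DeploymentStatus
# fields shown in the log dumps above.

def deployment_complete(desired_replicas, status):
    """True once every replica is updated and available and no surplus
    replicas from the old replica sets remain."""
    return (status["Replicas"] == desired_replicas
            and status["UpdatedReplicas"] == desired_replicas
            and status["AvailableReplicas"] == desired_replicas
            and status["UnavailableReplicas"] == 0)

# Status from the 13:58:29 poll: the new pod is Ready but not yet
# Available (MinReadySeconds is 10), so the loop keeps waiting.
mid_rollover = {"Replicas": 2, "UpdatedReplicas": 1,
                "AvailableReplicas": 1, "UnavailableReplicas": 1}

# Status from the final AfterEach dump: the rollover has finished.
final = {"Replicas": 1, "UpdatedReplicas": 1,
         "AvailableReplicas": 1, "UnavailableReplicas": 0}
```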
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 16 13:58:39.338: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4473,SelfLink:/apis/apps/v1/namespaces/deployment-4473/deployments/test-rollover-deployment,UID:c8319baf-c21c-4505-91e6-95e7cb53c46d,ResourceVersion:24578701,Generation:2,CreationTimestamp:2020-02-16 13:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-16 13:58:19 +0000 UTC 2020-02-16 13:58:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-16 13:58:39 +0000 UTC 2020-02-16 13:58:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 16 13:58:39.374: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4473,SelfLink:/apis/apps/v1/namespaces/deployment-4473/replicasets/test-rollover-deployment-854595fc44,UID:d46e5c6e-1744-47c3-9b7a-6275ce55aeaa,ResourceVersion:24578691,Generation:2,CreationTimestamp:2020-02-16 13:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c8319baf-c21c-4505-91e6-95e7cb53c46d 0xc002d77637 0xc002d77638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 16 13:58:39.374: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 16 13:58:39.374: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4473,SelfLink:/apis/apps/v1/namespaces/deployment-4473/replicasets/test-rollover-controller,UID:af2b34b7-0982-4f1c-afae-9ebc994cd41f,ResourceVersion:24578700,Generation:2,CreationTimestamp:2020-02-16 13:58:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c8319baf-c21c-4505-91e6-95e7cb53c46d 0xc002d77567 0xc002d77568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 13:58:39.374: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4473,SelfLink:/apis/apps/v1/namespaces/deployment-4473/replicasets/test-rollover-deployment-9b8b997cf,UID:8ef78a09-28f2-4cc8-aefb-5f92cb11ee54,ResourceVersion:24578658,Generation:2,CreationTimestamp:2020-02-16 13:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c8319baf-c21c-4505-91e6-95e7cb53c46d 0xc002d77700 0xc002d77701}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 13:58:39.380: INFO: Pod "test-rollover-deployment-854595fc44-h6zp2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-h6zp2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4473,SelfLink:/api/v1/namespaces/deployment-4473/pods/test-rollover-deployment-854595fc44-h6zp2,UID:72e595e9-d2ec-403e-8061-2cc25f4d10f2,ResourceVersion:24578675,Generation:0,CreationTimestamp:2020-02-16 13:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 d46e5c6e-1744-47c3-9b7a-6275ce55aeaa 0xc002a37fa7 0xc002a37fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dzgd6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dzgd6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dzgd6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029bc010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029bc030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:58:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:58:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:58:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:58:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-16 13:58:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-16 13:58:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://7c4b348df192aa6381b3a02acf8f13b6fbe8b418070e466b7df8a57f452c65a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:58:39.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4473" for this suite.
Feb 16 13:58:47.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:58:47.866: INFO: namespace deployment-4473 deletion completed in 8.478635333s

• [SLOW TEST:40.255 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:58:47.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 16 13:58:56.091: INFO: Expected: &{OK} to match Container's Termination Message: OK --
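A sketch of the selection rule this test exercises (assumed from the documented TerminationMessagePolicy semantics, not the kubelet source): whatever the container wrote to its terminationMessagePath always wins; the tail of the container logs is used only under FallbackToLogsOnError, and only when the container failed with an empty message file.

```python
# Illustrative model of how the termination message checked above is
# chosen. Here the pod succeeds and the file contains "OK", so the
# fallback never triggers.

def termination_message(policy, message_file, exit_code, log_tail):
    if message_file:
        # File contents take precedence regardless of policy.
        return message_file
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        # Fallback: empty file AND a failed container.
        return log_tail
    return ""

msg = termination_message("FallbackToLogsOnError", "OK", 0, "some logs")
```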
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:58:56.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5933" for this suite.
Feb 16 13:59:02.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:59:02.306: INFO: namespace container-runtime-5933 deletion completed in 6.146960848s

• [SLOW TEST:14.440 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:59:02.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 16 13:59:02.404: INFO: Waiting up to 5m0s for pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d" in namespace "emptydir-6849" to be "success or failure"
Feb 16 13:59:02.433: INFO: Pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.181639ms
Feb 16 13:59:04.442: INFO: Pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037326416s
Feb 16 13:59:06.452: INFO: Pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047514991s
Feb 16 13:59:08.463: INFO: Pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05769736s
Feb 16 13:59:10.475: INFO: Pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069690402s
STEP: Saw pod success
Feb 16 13:59:10.475: INFO: Pod "pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d" satisfied condition "success or failure"
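The "success or failure" condition polled above can be sketched as a terminal-phase check: the wait ends at the first terminal pod phase, and only Succeeded passes the test. (Illustrative; the framework's helper also propagates errors and timeouts.)

```python
# Minimal model of the pod wait loop logged above: poll the phase
# until it is terminal.

def pod_finished(phase):
    return phase in ("Succeeded", "Failed")

# Phases observed in the log: four Pending polls, then Succeeded.
observed = ["Pending", "Pending", "Pending", "Pending", "Succeeded"]
terminal = next(p for p in observed if pod_finished(p))
```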
Feb 16 13:59:10.480: INFO: Trying to get logs from node iruya-node pod pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d container test-container: 
STEP: delete the pod
Feb 16 13:59:10.617: INFO: Waiting for pod pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d to disappear
Feb 16 13:59:10.625: INFO: Pod pod-63f6abe3-7e97-409a-867c-31d3cf79bb4d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:59:10.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6849" for this suite.
Feb 16 13:59:16.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:59:16.854: INFO: namespace emptydir-6849 deletion completed in 6.221617375s

• [SLOW TEST:14.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:59:16.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-10c9766c-af43-46da-b477-2ef741a6d1c5
STEP: Creating a pod to test consume secrets
Feb 16 13:59:17.023: INFO: Waiting up to 5m0s for pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf" in namespace "secrets-9065" to be "success or failure"
Feb 16 13:59:17.201: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf": Phase="Pending", Reason="", readiness=false. Elapsed: 178.036306ms
Feb 16 13:59:19.444: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421193774s
Feb 16 13:59:21.454: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430631057s
Feb 16 13:59:23.461: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438505488s
Feb 16 13:59:25.477: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454001928s
Feb 16 13:59:27.486: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462824293s
STEP: Saw pod success
Feb 16 13:59:27.486: INFO: Pod "pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf" satisfied condition "success or failure"
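"With mappings" here means the secret is projected through `spec.volumes[].secret.items`, remounting a key under a chosen path instead of its key name. A hedged sketch of the pod shape this test creates (the container name and secret name come from the log above; the key, path, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test       # container name from the log above
    image: busybox                 # illustrative image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-10c9766c-af43-46da-b477-2ef741a6d1c5
      items:                       # the "mapping": key -> new mount path
      - key: data-1
        path: new-path-data-1
```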
Feb 16 13:59:27.491: INFO: Trying to get logs from node iruya-node pod pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf container secret-volume-test: 
STEP: delete the pod
Feb 16 13:59:27.594: INFO: Waiting for pod pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf to disappear
Feb 16 13:59:27.604: INFO: Pod pod-secrets-3baa88d7-c7b5-4845-8b52-88e1ada8bebf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:59:27.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9065" for this suite.
Feb 16 13:59:33.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:59:33.999: INFO: namespace secrets-9065 deletion completed in 6.244124166s

• [SLOW TEST:17.144 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:59:33.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-cf779ce0-15e6-4751-92b0-f6bd2ed3d88e
STEP: Creating a pod to test consume configMaps
Feb 16 13:59:34.086: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d" in namespace "configmap-2466" to be "success or failure"
Feb 16 13:59:34.102: INFO: Pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.849139ms
Feb 16 13:59:36.109: INFO: Pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022725756s
Feb 16 13:59:38.123: INFO: Pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037114127s
Feb 16 13:59:40.139: INFO: Pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053081414s
Feb 16 13:59:42.150: INFO: Pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063830148s
STEP: Saw pod success
Feb 16 13:59:42.150: INFO: Pod "pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d" satisfied condition "success or failure"
Feb 16 13:59:42.153: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d container configmap-volume-test: 
STEP: delete the pod
Feb 16 13:59:42.436: INFO: Waiting for pod pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d to disappear
Feb 16 13:59:42.442: INFO: Pod pod-configmaps-7a774b1d-a948-451b-b635-ecc90228c46d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 13:59:42.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2466" for this suite.
Feb 16 13:59:48.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:59:48.610: INFO: namespace configmap-2466 deletion completed in 6.164880037s

• [SLOW TEST:14.611 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 13:59:48.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 16 14:00:04.995: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 14:00:05.003: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 14:00:07.003: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 14:00:07.011: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 14:00:09.003: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 14:00:09.010: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 14:00:11.003: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 14:00:11.011: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:00:11.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9006" for this suite.
Feb 16 14:00:33.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:00:33.165: INFO: namespace container-lifecycle-hook-9006 deletion completed in 22.146017498s

• [SLOW TEST:44.555 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:00:33.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-c9d60dba-c86b-4dbc-a943-f51d0bb0d830 in namespace container-probe-8994
Feb 16 14:00:41.263: INFO: Started pod test-webserver-c9d60dba-c86b-4dbc-a943-f51d0bb0d830 in namespace container-probe-8994
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 14:00:41.268: INFO: Initial restart count of pod test-webserver-c9d60dba-c86b-4dbc-a943-f51d0bb0d830 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:04:42.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8994" for this suite.
Feb 16 14:04:48.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:04:48.978: INFO: namespace container-probe-8994 deletion completed in 6.215742582s

• [SLOW TEST:255.813 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:04:48.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2601 to expose endpoints map[]
Feb 16 14:04:49.192: INFO: Get endpoints failed (8.874204ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 16 14:04:50.965: INFO: successfully validated that service multi-endpoint-test in namespace services-2601 exposes endpoints map[] (1.782525926s elapsed)
STEP: Creating pod pod1 in namespace services-2601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2601 to expose endpoints map[pod1:[100]]
Feb 16 14:04:55.179: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.151798724s elapsed, will retry)
Feb 16 14:04:58.226: INFO: successfully validated that service multi-endpoint-test in namespace services-2601 exposes endpoints map[pod1:[100]] (7.198902805s elapsed)
STEP: Creating pod pod2 in namespace services-2601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2601 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 16 14:05:02.852: INFO: Unexpected endpoints: found map[fa97fb22-4eed-441e-b6f7-0359833a1720:[100]], expected map[pod1:[100] pod2:[101]] (4.621319963s elapsed, will retry)
Feb 16 14:05:04.931: INFO: successfully validated that service multi-endpoint-test in namespace services-2601 exposes endpoints map[pod1:[100] pod2:[101]] (6.699964166s elapsed)
STEP: Deleting pod pod1 in namespace services-2601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2601 to expose endpoints map[pod2:[101]]
Feb 16 14:05:05.978: INFO: successfully validated that service multi-endpoint-test in namespace services-2601 exposes endpoints map[pod2:[101]] (1.04047291s elapsed)
STEP: Deleting pod pod2 in namespace services-2601
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2601 to expose endpoints map[]
Feb 16 14:05:07.020: INFO: successfully validated that service multi-endpoint-test in namespace services-2601 exposes endpoints map[] (1.020064937s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:05:08.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2601" for this suite.
Feb 16 14:05:30.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:05:30.927: INFO: namespace services-2601 deletion completed in 22.272110054s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.948 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
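Each "exposes endpoints map[pod1:[100] pod2:[101]]" line above is the result of comparing the expected pod-to-ports map against what the service's Endpoints object currently reports, retrying until they match. A simplified, order-insensitive version of that comparison (the real test resolves pod names through the API and this map shape is an illustrative stand-in):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsEqual compares an expected pod->ports map against an observed
// one: same pods, same port sets, regardless of ordering.
func endpointsEqual(expected, observed map[string][]int) bool {
	if len(expected) != len(observed) {
		return false
	}
	for pod, ports := range expected {
		got, ok := observed[pod]
		if !ok {
			return false
		}
		a := append([]int(nil), ports...)
		b := append([]int(nil), got...)
		sort.Ints(a)
		sort.Ints(b)
		if !reflect.DeepEqual(a, b) {
			return false
		}
	}
	return true
}

func main() {
	expected := map[string][]int{"pod1": {100}, "pod2": {101}}
	observed := map[string][]int{"pod2": {101}, "pod1": {100}}
	fmt.Println(endpointsEqual(expected, observed)) // prints "true"
}
```

The "Unexpected endpoints: found map[...]" retries in the log are exactly this check returning false while kube-proxy and the endpoints controller catch up.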
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:05:30.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 16 14:05:31.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8121'
Feb 16 14:05:31.350: INFO: stderr: ""
Feb 16 14:05:31.350: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 14:05:31.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:31.484: INFO: stderr: ""
Feb 16 14:05:31.484: INFO: stdout: "update-demo-nautilus-hbkhf update-demo-nautilus-w9fhv "
Feb 16 14:05:31.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:31.567: INFO: stderr: ""
Feb 16 14:05:31.567: INFO: stdout: ""
Feb 16 14:05:31.567: INFO: update-demo-nautilus-hbkhf is created but not running
Feb 16 14:05:36.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:37.171: INFO: stderr: ""
Feb 16 14:05:37.171: INFO: stdout: "update-demo-nautilus-hbkhf update-demo-nautilus-w9fhv "
Feb 16 14:05:37.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:38.048: INFO: stderr: ""
Feb 16 14:05:38.048: INFO: stdout: ""
Feb 16 14:05:38.048: INFO: update-demo-nautilus-hbkhf is created but not running
Feb 16 14:05:43.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:43.195: INFO: stderr: ""
Feb 16 14:05:43.195: INFO: stdout: "update-demo-nautilus-hbkhf update-demo-nautilus-w9fhv "
Feb 16 14:05:43.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:43.325: INFO: stderr: ""
Feb 16 14:05:43.326: INFO: stdout: "true"
Feb 16 14:05:43.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:43.406: INFO: stderr: ""
Feb 16 14:05:43.406: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 14:05:43.406: INFO: validating pod update-demo-nautilus-hbkhf
Feb 16 14:05:43.421: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 14:05:43.421: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 16 14:05:43.421: INFO: update-demo-nautilus-hbkhf is verified up and running
Feb 16 14:05:43.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9fhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:43.505: INFO: stderr: ""
Feb 16 14:05:43.505: INFO: stdout: "true"
Feb 16 14:05:43.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9fhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:43.587: INFO: stderr: ""
Feb 16 14:05:43.587: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 14:05:43.587: INFO: validating pod update-demo-nautilus-w9fhv
Feb 16 14:05:43.599: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 14:05:43.600: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 16 14:05:43.600: INFO: update-demo-nautilus-w9fhv is verified up and running
STEP: scaling down the replication controller
Feb 16 14:05:43.604: INFO: scanned /root for discovery docs: 
Feb 16 14:05:43.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8121'
Feb 16 14:05:44.724: INFO: stderr: ""
Feb 16 14:05:44.724: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 14:05:44.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:44.840: INFO: stderr: ""
Feb 16 14:05:44.840: INFO: stdout: "update-demo-nautilus-hbkhf update-demo-nautilus-w9fhv "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 16 14:05:49.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:49.981: INFO: stderr: ""
Feb 16 14:05:49.981: INFO: stdout: "update-demo-nautilus-hbkhf "
Feb 16 14:05:49.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:50.093: INFO: stderr: ""
Feb 16 14:05:50.093: INFO: stdout: "true"
Feb 16 14:05:50.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:50.193: INFO: stderr: ""
Feb 16 14:05:50.193: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 14:05:50.193: INFO: validating pod update-demo-nautilus-hbkhf
Feb 16 14:05:50.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 14:05:50.206: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 16 14:05:50.206: INFO: update-demo-nautilus-hbkhf is verified up and running
STEP: scaling up the replication controller
Feb 16 14:05:50.210: INFO: scanned /root for discovery docs: 
Feb 16 14:05:50.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8121'
Feb 16 14:05:51.408: INFO: stderr: ""
Feb 16 14:05:51.408: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 14:05:51.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:51.499: INFO: stderr: ""
Feb 16 14:05:51.500: INFO: stdout: "update-demo-nautilus-96vwz update-demo-nautilus-hbkhf "
Feb 16 14:05:51.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:51.607: INFO: stderr: ""
Feb 16 14:05:51.607: INFO: stdout: ""
Feb 16 14:05:51.607: INFO: update-demo-nautilus-96vwz is created but not running
Feb 16 14:05:56.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:05:56.766: INFO: stderr: ""
Feb 16 14:05:56.766: INFO: stdout: "update-demo-nautilus-96vwz update-demo-nautilus-hbkhf "
Feb 16 14:05:56.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:05:56.862: INFO: stderr: ""
Feb 16 14:05:56.862: INFO: stdout: ""
Feb 16 14:05:56.863: INFO: update-demo-nautilus-96vwz is created but not running
Feb 16 14:06:01.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:06:01.988: INFO: stderr: ""
Feb 16 14:06:01.988: INFO: stdout: "update-demo-nautilus-96vwz update-demo-nautilus-hbkhf "
Feb 16 14:06:01.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:06:02.081: INFO: stderr: ""
Feb 16 14:06:02.082: INFO: stdout: ""
Feb 16 14:06:02.082: INFO: update-demo-nautilus-96vwz is created but not running
Feb 16 14:06:07.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8121'
Feb 16 14:06:07.218: INFO: stderr: ""
Feb 16 14:06:07.218: INFO: stdout: "update-demo-nautilus-96vwz update-demo-nautilus-hbkhf "
Feb 16 14:06:07.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96vwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:06:07.327: INFO: stderr: ""
Feb 16 14:06:07.327: INFO: stdout: "true"
Feb 16 14:06:07.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96vwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:06:07.411: INFO: stderr: ""
Feb 16 14:06:07.411: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 14:06:07.411: INFO: validating pod update-demo-nautilus-96vwz
Feb 16 14:06:07.419: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 14:06:07.419: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 16 14:06:07.419: INFO: update-demo-nautilus-96vwz is verified up and running
Feb 16 14:06:07.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:06:07.506: INFO: stderr: ""
Feb 16 14:06:07.506: INFO: stdout: "true"
Feb 16 14:06:07.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hbkhf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8121'
Feb 16 14:06:07.581: INFO: stderr: ""
Feb 16 14:06:07.581: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 14:06:07.581: INFO: validating pod update-demo-nautilus-hbkhf
Feb 16 14:06:07.599: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 14:06:07.599: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 16 14:06:07.599: INFO: update-demo-nautilus-hbkhf is verified up and running
STEP: using delete to clean up resources
Feb 16 14:06:07.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8121'
Feb 16 14:06:07.688: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:06:07.688: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 16 14:06:07.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8121'
Feb 16 14:06:07.869: INFO: stderr: "No resources found.\n"
Feb 16 14:06:07.869: INFO: stdout: ""
Feb 16 14:06:07.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8121 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 14:06:08.083: INFO: stderr: ""
Feb 16 14:06:08.084: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:06:08.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8121" for this suite.
Feb 16 14:06:30.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:06:30.204: INFO: namespace kubectl-8121 deletion completed in 22.112804657s

• [SLOW TEST:59.277 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
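The kubectl invocations above lean heavily on Go templates such as `{{range .items}}{{.metadata.name}} {{end}}` to pull pod names out of a pod List. Because kubectl decodes the API response into generic maps, the lowercase field names work as map lookups; a plain `text/template` program reproduces the behavior (note that kubectl additionally registers helper functions like `exists`, which the stock package does not have):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render applies the same range/field-lookup template kubectl uses in the
// log above to a map-shaped stand-in for a decoded pod List.
func render(data interface{}) string {
	t := template.Must(template.New("names").Parse(
		"{{range .items}}{{.metadata.name}} {{end}}"))
	var sb strings.Builder
	_ = t.Execute(&sb, data)
	return sb.String()
}

func main() {
	list := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-hbkhf"}},
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-w9fhv"}},
		},
	}
	// prints "update-demo-nautilus-hbkhf update-demo-nautilus-w9fhv "
	fmt.Println(render(list))
}
```

The trailing space in the rendered output matches the stdout lines in the log, which is why the framework's pod-name stdout always ends with one.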
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:06:30.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 16 14:06:30.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3769,SelfLink:/api/v1/namespaces/watch-3769/configmaps/e2e-watch-test-watch-closed,UID:274a748c-0e24-4844-ad66-9a3a6acb7d9c,ResourceVersion:24579656,Generation:0,CreationTimestamp:2020-02-16 14:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 14:06:30.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3769,SelfLink:/api/v1/namespaces/watch-3769/configmaps/e2e-watch-test-watch-closed,UID:274a748c-0e24-4844-ad66-9a3a6acb7d9c,ResourceVersion:24579657,Generation:0,CreationTimestamp:2020-02-16 14:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 16 14:06:30.455: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3769,SelfLink:/api/v1/namespaces/watch-3769/configmaps/e2e-watch-test-watch-closed,UID:274a748c-0e24-4844-ad66-9a3a6acb7d9c,ResourceVersion:24579658,Generation:0,CreationTimestamp:2020-02-16 14:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 14:06:30.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3769,SelfLink:/api/v1/namespaces/watch-3769/configmaps/e2e-watch-test-watch-closed,UID:274a748c-0e24-4844-ad66-9a3a6acb7d9c,ResourceVersion:24579659,Generation:0,CreationTimestamp:2020-02-16 14:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:06:30.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3769" for this suite.
Feb 16 14:06:36.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:06:36.741: INFO: namespace watch-3769 deletion completed in 6.263392558s

• [SLOW TEST:6.537 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
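The test above closes a watch after observing mutation 1, then opens a new watch from the last observed `ResourceVersion` and expects to replay the later MODIFIED and DELETED events in order. A minimal in-memory sketch of that semantic (illustrative data model, not the real client-go API; resource versions copied from the log lines above):

```python
# Sketch of watch resumption semantics: a watch restarted from the last
# observed resourceVersion must deliver every subsequent change, in order.
def watch_from(events, resource_version):
    """Yield events whose resourceVersion is greater than the given one."""
    return [e for e in events if e["resourceVersion"] > resource_version]

# Event history for the configmap, mirroring the log output above.
events = [
    {"type": "MODIFIED", "resourceVersion": 24579657, "mutation": 1},
    {"type": "MODIFIED", "resourceVersion": 24579658, "mutation": 2},
    {"type": "DELETED",  "resourceVersion": 24579659, "mutation": 2},
]

# The first watch closed after observing resourceVersion 24579657;
# the second watch resumes from there and replays the rest.
replayed = watch_from(events, 24579657)
print([e["type"] for e in replayed])  # → ['MODIFIED', 'DELETED']
```

This matches the two `Got : MODIFIED` / `Got : DELETED` lines the restarted watch reports.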
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:06:36.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 16 14:06:47.504: INFO: Successfully updated pod "labelsupdate70ab3c01-896e-4fb7-8856-480b3f6d00e8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:06:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-690" for this suite.
Feb 16 14:07:11.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:07:11.846: INFO: namespace projected-690 deletion completed in 22.226635537s

• [SLOW TEST:35.104 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:07:11.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0216 14:07:54.070019       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 14:07:54.070: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:07:54.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1050" for this suite.
Feb 16 14:08:14.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:08:14.190: INFO: namespace gc-1050 deletion completed in 20.11702536s

• [SLOW TEST:62.343 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
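The garbage collector test deletes the RC with orphaning delete options and then waits 30 seconds to confirm the pods are *not* collected. A minimal in-memory sketch of orphan-deletion semantics (the names and data model here are illustrative, not client-go): the owner is removed, the dependents survive, and their owner references are cleared.

```python
# Sketch of propagationPolicy=Orphan: deleting the owner keeps dependents
# alive and strips the deleted owner from their ownerReferences.
def delete_orphan(owners, pods, owner_name):
    owners = [o for o in owners if o != owner_name]
    for p in pods:
        p["ownerReferences"] = [r for r in p["ownerReferences"] if r != owner_name]
    return owners, pods

# Hypothetical RC name and pods; the log does not record the real names.
owners = ["example-rc"]
pods = [{"name": f"example-rc-{i}", "ownerReferences": ["example-rc"]} for i in range(2)]

owners, pods = delete_orphan(owners, pods, "example-rc")
print(len(pods), owners)  # → 2 []
```

The pods remain schedulable objects with no owner, which is exactly what the "wait for 30 seconds to see if the garbage collector mistakenly deletes the pods" step verifies.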
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:08:14.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:08:45.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4189" for this suite.
Feb 16 14:08:51.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:08:51.870: INFO: namespace namespaces-4189 deletion completed in 6.187152117s
STEP: Destroying namespace "nsdeletetest-4914" for this suite.
Feb 16 14:08:51.878: INFO: Namespace nsdeletetest-4914 was already deleted
STEP: Destroying namespace "nsdeletetest-4757" for this suite.
Feb 16 14:08:57.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:08:58.037: INFO: namespace nsdeletetest-4757 deletion completed in 6.158799173s

• [SLOW TEST:43.846 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:08:58.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 16 14:09:06.694: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4269 pod-service-account-47983e81-333c-4f31-bf3b-9b3111393082 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 16 14:09:09.447: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4269 pod-service-account-47983e81-333c-4f31-bf3b-9b3111393082 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 16 14:09:10.147: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4269 pod-service-account-47983e81-333c-4f31-bf3b-9b3111393082 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:09:10.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4269" for this suite.
Feb 16 14:09:16.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:09:16.779: INFO: namespace svcaccounts-4269 deletion completed in 6.161662116s

• [SLOW TEST:18.741 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
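The three `kubectl exec ... cat` commands above read the standard files that Kubernetes auto-mounts from the pod's service account. An illustrative manifest for such a pod (names and image are placeholders, not the test's generated values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example   # placeholder name
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
    # With automountServiceAccountToken enabled (the default), the kubelet
    # mounts the account's credentials at the well-known paths read above:
    #   /var/run/secrets/kubernetes.io/serviceaccount/token
    #   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    #   /var/run/secrets/kubernetes.io/serviceaccount/namespace
```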
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:09:16.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 16 14:09:27.474: INFO: Successfully updated pod "annotationupdate1c94608b-7c75-4935-942e-397ea3c36f64"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:09:29.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6773" for this suite.
Feb 16 14:09:51.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:09:51.845: INFO: namespace downward-api-6773 deletion completed in 22.236118915s

• [SLOW TEST:35.065 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:09:51.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 14:09:52.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001" in namespace "downward-api-2677" to be "success or failure"
Feb 16 14:09:52.029: INFO: Pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436847ms
Feb 16 14:09:54.048: INFO: Pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023579689s
Feb 16 14:09:56.057: INFO: Pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032603797s
Feb 16 14:09:58.069: INFO: Pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044675446s
Feb 16 14:10:00.076: INFO: Pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05138819s
STEP: Saw pod success
Feb 16 14:10:00.076: INFO: Pod "downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001" satisfied condition "success or failure"
Feb 16 14:10:00.079: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001 container client-container: 
STEP: delete the pod
Feb 16 14:10:00.167: INFO: Waiting for pod downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001 to disappear
Feb 16 14:10:00.175: INFO: Pod downwardapi-volume-eda7fb16-9d3d-44bf-b545-935d30bea001 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:10:00.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2677" for this suite.
Feb 16 14:10:06.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:10:06.362: INFO: namespace downward-api-2677 deletion completed in 6.180018032s

• [SLOW TEST:14.517 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
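The downward API volume test above projects the container's CPU request into a file and asserts on its contents. An illustrative manifest for the same mechanism (field names follow the core/v1 API; the pod/container names and values are placeholders, not the test's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # placeholder name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m    # file contains the request in millicores, e.g. "250"
```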
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:10:06.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:10:11.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2262" for this suite.
Feb 16 14:10:17.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:10:18.098: INFO: namespace watch-2262 deletion completed in 6.202877757s

• [SLOW TEST:11.736 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:10:18.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 16 14:10:26.766: INFO: Successfully updated pod "pod-update-b8d6c3a8-3fb7-40e5-909d-5076722d01ed"
STEP: verifying the updated pod is in kubernetes
Feb 16 14:10:26.817: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:10:26.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5403" for this suite.
Feb 16 14:10:48.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:10:48.990: INFO: namespace pods-5403 deletion completed in 22.163730919s

• [SLOW TEST:30.892 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:10:48.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4019/secret-test-b7adc2ab-473c-4b15-a9a7-9552a65dce50
STEP: Creating a pod to test consume secrets
Feb 16 14:10:49.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24" in namespace "secrets-4019" to be "success or failure"
Feb 16 14:10:49.159: INFO: Pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 5.752041ms
Feb 16 14:10:51.167: INFO: Pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013207252s
Feb 16 14:10:53.174: INFO: Pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021047634s
Feb 16 14:10:55.181: INFO: Pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02772024s
Feb 16 14:10:57.193: INFO: Pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040045283s
STEP: Saw pod success
Feb 16 14:10:57.193: INFO: Pod "pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24" satisfied condition "success or failure"
Feb 16 14:10:57.197: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24 container env-test: 
STEP: delete the pod
Feb 16 14:10:57.266: INFO: Waiting for pod pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24 to disappear
Feb 16 14:10:57.301: INFO: Pod pod-configmaps-6752c81f-186e-45a8-be0c-db945342ec24 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:10:57.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4019" for this suite.
Feb 16 14:11:03.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:11:03.448: INFO: namespace secrets-4019 deletion completed in 6.139342874s

• [SLOW TEST:14.457 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:11:03.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d1e67a8c-242d-43d2-8c27-f6d3672a65ac
STEP: Creating a pod to test consume secrets
Feb 16 14:11:03.590: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0" in namespace "projected-8411" to be "success or failure"
Feb 16 14:11:03.605: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.215383ms
Feb 16 14:11:05.616: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025995984s
Feb 16 14:11:07.624: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033252124s
Feb 16 14:11:09.639: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048699923s
Feb 16 14:11:11.648: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0": Phase="Running", Reason="", readiness=true. Elapsed: 8.058000664s
Feb 16 14:11:13.662: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071503464s
STEP: Saw pod success
Feb 16 14:11:13.662: INFO: Pod "pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0" satisfied condition "success or failure"
Feb 16 14:11:13.667: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 14:11:13.738: INFO: Waiting for pod pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0 to disappear
Feb 16 14:11:13.743: INFO: Pod pod-projected-secrets-a3fc312c-1c43-460b-b9a2-47e94a0412b0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:11:13.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8411" for this suite.
Feb 16 14:11:19.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:11:19.963: INFO: namespace projected-8411 deletion completed in 6.213132103s

• [SLOW TEST:16.514 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
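The projected-secret test above mounts a secret with a non-default file mode into a pod running as non-root with an `fsGroup`. An illustrative manifest showing those three pieces together (names and image are placeholders; `defaultMode` sits on the projected volume, and `runAsUser`/`fsGroup` on the pod security context):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # placeholder name
spec:
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 2000     # volume files are group-owned by this GID
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-test   # placeholder secret name
```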
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:11:19.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a095d963-6751-4c6b-8097-b19d21183bb2
STEP: Creating a pod to test consume configMaps
Feb 16 14:11:20.050: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9" in namespace "projected-690" to be "success or failure"
Feb 16 14:11:20.071: INFO: Pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.619343ms
Feb 16 14:11:22.079: INFO: Pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029423826s
Feb 16 14:11:24.093: INFO: Pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043086833s
Feb 16 14:11:26.100: INFO: Pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050581181s
Feb 16 14:11:28.116: INFO: Pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066753535s
STEP: Saw pod success
Feb 16 14:11:28.117: INFO: Pod "pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9" satisfied condition "success or failure"
Feb 16 14:11:28.123: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 14:11:28.188: INFO: Waiting for pod pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9 to disappear
Feb 16 14:11:28.193: INFO: Pod pod-projected-configmaps-1ec07242-065c-4a8a-ad2b-ef8a1692b3e9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:11:28.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-690" for this suite.
Feb 16 14:11:34.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:11:34.358: INFO: namespace projected-690 deletion completed in 6.159152516s

• [SLOW TEST:14.395 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:11:34.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 14:11:34.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:11:42.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5555" for this suite.
Feb 16 14:12:29.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:12:29.830: INFO: namespace pods-5555 deletion completed in 47.202052114s

• [SLOW TEST:55.471 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
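The test above streams container logs over a websocket rather than plain HTTP. The endpoint it upgrades is the standard pod `log` subresource on the apiserver (`/api/v1/namespaces/{ns}/pods/{pod}/log`). A minimal sketch of how that URL is shaped — the `wss://` scheme, host, and pod name below are illustrative assumptions, not values taken from this run:

```python
from typing import Optional
from urllib.parse import urlencode

def pod_log_ws_url(host: str, namespace: str, pod: str,
                   container: Optional[str] = None, follow: bool = True) -> str:
    """Build a websocket URL for a pod's log subresource on the apiserver."""
    params = {"follow": str(follow).lower()}
    if container:
        # Only needed when the pod has more than one container.
        params["container"] = container
    return (f"wss://{host}/api/v1/namespaces/{namespace}"
            f"/pods/{pod}/log?{urlencode(params)}")

# Hypothetical host and pod name, for illustration only.
print(pod_log_ws_url("192.168.0.10:6443", "pods-5555", "pod-logs-websocket"))
```

A real client would additionally present credentials from the kubeconfig (here `/root/.kube/config`) when performing the upgrade handshake.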
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:12:29.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9948
I0216 14:12:30.037092       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9948, replica count: 1
I0216 14:12:31.087882       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:32.088527       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:33.088990       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:34.089499       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:35.089944       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:36.090378       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:37.090868       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 14:12:38.091419       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 16 14:12:38.249: INFO: Created: latency-svc-9fpf9
Feb 16 14:12:38.374: INFO: Got endpoints: latency-svc-9fpf9 [182.31414ms]
Feb 16 14:12:38.433: INFO: Created: latency-svc-gsmsl
Feb 16 14:12:38.451: INFO: Got endpoints: latency-svc-gsmsl [76.501933ms]
Feb 16 14:12:38.541: INFO: Created: latency-svc-p6w5b
Feb 16 14:12:38.586: INFO: Got endpoints: latency-svc-p6w5b [212.11355ms]
Feb 16 14:12:38.590: INFO: Created: latency-svc-x44m9
Feb 16 14:12:38.614: INFO: Got endpoints: latency-svc-x44m9 [238.317323ms]
Feb 16 14:12:38.749: INFO: Created: latency-svc-wmnbm
Feb 16 14:12:38.819: INFO: Got endpoints: latency-svc-wmnbm [443.40684ms]
Feb 16 14:12:38.826: INFO: Created: latency-svc-5qn7f
Feb 16 14:12:38.884: INFO: Got endpoints: latency-svc-5qn7f [509.026205ms]
Feb 16 14:12:38.924: INFO: Created: latency-svc-l6l7d
Feb 16 14:12:38.934: INFO: Got endpoints: latency-svc-l6l7d [558.977046ms]
Feb 16 14:12:38.975: INFO: Created: latency-svc-mj7nv
Feb 16 14:12:39.070: INFO: Got endpoints: latency-svc-mj7nv [694.779809ms]
Feb 16 14:12:39.117: INFO: Created: latency-svc-5g8ts
Feb 16 14:12:39.122: INFO: Got endpoints: latency-svc-5g8ts [746.339299ms]
Feb 16 14:12:39.156: INFO: Created: latency-svc-qgbkf
Feb 16 14:12:39.369: INFO: Got endpoints: latency-svc-qgbkf [993.98487ms]
Feb 16 14:12:39.865: INFO: Created: latency-svc-zr9sz
Feb 16 14:12:39.882: INFO: Created: latency-svc-hpddr
Feb 16 14:12:39.900: INFO: Got endpoints: latency-svc-hpddr [1.525355868s]
Feb 16 14:12:39.988: INFO: Got endpoints: latency-svc-zr9sz [1.613309525s]
Feb 16 14:12:40.035: INFO: Created: latency-svc-b6csf
Feb 16 14:12:40.055: INFO: Got endpoints: latency-svc-b6csf [1.679706156s]
Feb 16 14:12:40.157: INFO: Created: latency-svc-mlmc6
Feb 16 14:12:40.158: INFO: Got endpoints: latency-svc-mlmc6 [1.782562083s]
Feb 16 14:12:40.235: INFO: Created: latency-svc-5qsjg
Feb 16 14:12:40.347: INFO: Got endpoints: latency-svc-5qsjg [1.971707551s]
Feb 16 14:12:40.386: INFO: Created: latency-svc-8h4xc
Feb 16 14:12:40.391: INFO: Got endpoints: latency-svc-8h4xc [2.015820473s]
Feb 16 14:12:40.443: INFO: Created: latency-svc-cpm7s
Feb 16 14:12:40.545: INFO: Got endpoints: latency-svc-cpm7s [2.094505423s]
Feb 16 14:12:40.557: INFO: Created: latency-svc-8q2cs
Feb 16 14:12:40.569: INFO: Got endpoints: latency-svc-8q2cs [1.981929267s]
Feb 16 14:12:40.614: INFO: Created: latency-svc-5pwtj
Feb 16 14:12:40.621: INFO: Got endpoints: latency-svc-5pwtj [2.00738657s]
Feb 16 14:12:40.735: INFO: Created: latency-svc-ltvfw
Feb 16 14:12:40.763: INFO: Got endpoints: latency-svc-ltvfw [1.944604163s]
Feb 16 14:12:40.898: INFO: Created: latency-svc-lzptf
Feb 16 14:12:40.926: INFO: Got endpoints: latency-svc-lzptf [2.040764792s]
Feb 16 14:12:40.977: INFO: Created: latency-svc-c564g
Feb 16 14:12:40.994: INFO: Got endpoints: latency-svc-c564g [2.059666573s]
Feb 16 14:12:41.199: INFO: Created: latency-svc-8pcps
Feb 16 14:12:41.249: INFO: Got endpoints: latency-svc-8pcps [2.178496425s]
Feb 16 14:12:41.388: INFO: Created: latency-svc-pq62c
Feb 16 14:12:41.401: INFO: Got endpoints: latency-svc-pq62c [406.918174ms]
Feb 16 14:12:41.457: INFO: Created: latency-svc-8l9kh
Feb 16 14:12:41.472: INFO: Got endpoints: latency-svc-8l9kh [2.349877408s]
Feb 16 14:12:41.651: INFO: Created: latency-svc-lg6l2
Feb 16 14:12:41.662: INFO: Got endpoints: latency-svc-lg6l2 [2.29228116s]
Feb 16 14:12:41.831: INFO: Created: latency-svc-5tm96
Feb 16 14:12:42.062: INFO: Got endpoints: latency-svc-5tm96 [2.161489197s]
Feb 16 14:12:42.068: INFO: Created: latency-svc-2jr2f
Feb 16 14:12:42.072: INFO: Got endpoints: latency-svc-2jr2f [2.083757046s]
Feb 16 14:12:42.147: INFO: Created: latency-svc-qkgcq
Feb 16 14:12:42.310: INFO: Got endpoints: latency-svc-qkgcq [2.255405983s]
Feb 16 14:12:42.351: INFO: Created: latency-svc-pps2z
Feb 16 14:12:42.392: INFO: Got endpoints: latency-svc-pps2z [2.233974001s]
Feb 16 14:12:42.565: INFO: Created: latency-svc-shjz7
Feb 16 14:12:42.647: INFO: Got endpoints: latency-svc-shjz7 [2.300051634s]
Feb 16 14:12:42.812: INFO: Created: latency-svc-ltxf5
Feb 16 14:12:42.834: INFO: Got endpoints: latency-svc-ltxf5 [2.442764863s]
Feb 16 14:12:43.052: INFO: Created: latency-svc-w6hdt
Feb 16 14:12:43.232: INFO: Got endpoints: latency-svc-w6hdt [2.686226299s]
Feb 16 14:12:43.235: INFO: Created: latency-svc-27hzn
Feb 16 14:12:43.245: INFO: Got endpoints: latency-svc-27hzn [2.675661419s]
Feb 16 14:12:43.418: INFO: Created: latency-svc-fk9s5
Feb 16 14:12:43.449: INFO: Got endpoints: latency-svc-fk9s5 [2.827578648s]
Feb 16 14:12:43.516: INFO: Created: latency-svc-sf66j
Feb 16 14:12:43.588: INFO: Got endpoints: latency-svc-sf66j [2.823636398s]
Feb 16 14:12:43.662: INFO: Created: latency-svc-vrs56
Feb 16 14:12:43.677: INFO: Got endpoints: latency-svc-vrs56 [2.75100983s]
Feb 16 14:12:43.776: INFO: Created: latency-svc-rzfpr
Feb 16 14:12:43.792: INFO: Got endpoints: latency-svc-rzfpr [2.54330727s]
Feb 16 14:12:43.919: INFO: Created: latency-svc-fgv5s
Feb 16 14:12:43.942: INFO: Got endpoints: latency-svc-fgv5s [2.540578571s]
Feb 16 14:12:43.997: INFO: Created: latency-svc-jfxjp
Feb 16 14:12:44.096: INFO: Got endpoints: latency-svc-jfxjp [2.623856228s]
Feb 16 14:12:44.136: INFO: Created: latency-svc-dscbw
Feb 16 14:12:44.163: INFO: Got endpoints: latency-svc-dscbw [2.50114221s]
Feb 16 14:12:44.317: INFO: Created: latency-svc-qnwq7
Feb 16 14:12:44.327: INFO: Got endpoints: latency-svc-qnwq7 [2.265077316s]
Feb 16 14:12:44.509: INFO: Created: latency-svc-d6769
Feb 16 14:12:44.528: INFO: Got endpoints: latency-svc-d6769 [2.456181708s]
Feb 16 14:12:44.716: INFO: Created: latency-svc-hbcv4
Feb 16 14:12:44.732: INFO: Got endpoints: latency-svc-hbcv4 [2.421232883s]
Feb 16 14:12:44.793: INFO: Created: latency-svc-hdck8
Feb 16 14:12:44.799: INFO: Got endpoints: latency-svc-hdck8 [2.407325907s]
Feb 16 14:12:44.916: INFO: Created: latency-svc-zcwhl
Feb 16 14:12:44.925: INFO: Got endpoints: latency-svc-zcwhl [2.277652473s]
Feb 16 14:12:44.977: INFO: Created: latency-svc-f2hxn
Feb 16 14:12:44.984: INFO: Got endpoints: latency-svc-f2hxn [2.149745783s]
Feb 16 14:12:45.072: INFO: Created: latency-svc-kpfkf
Feb 16 14:12:45.079: INFO: Got endpoints: latency-svc-kpfkf [1.846474718s]
Feb 16 14:12:45.127: INFO: Created: latency-svc-tdnqw
Feb 16 14:12:45.145: INFO: Got endpoints: latency-svc-tdnqw [1.899680085s]
Feb 16 14:12:45.266: INFO: Created: latency-svc-88gcg
Feb 16 14:12:45.341: INFO: Got endpoints: latency-svc-88gcg [1.890730474s]
Feb 16 14:12:45.342: INFO: Created: latency-svc-rhjvf
Feb 16 14:12:45.417: INFO: Got endpoints: latency-svc-rhjvf [1.829129886s]
Feb 16 14:12:45.451: INFO: Created: latency-svc-mnkw5
Feb 16 14:12:45.453: INFO: Got endpoints: latency-svc-mnkw5 [1.776163944s]
Feb 16 14:12:45.491: INFO: Created: latency-svc-6nhhg
Feb 16 14:12:45.508: INFO: Got endpoints: latency-svc-6nhhg [1.715919081s]
Feb 16 14:12:45.627: INFO: Created: latency-svc-rg9zj
Feb 16 14:12:45.668: INFO: Got endpoints: latency-svc-rg9zj [1.72530181s]
Feb 16 14:12:45.671: INFO: Created: latency-svc-t55td
Feb 16 14:12:45.676: INFO: Got endpoints: latency-svc-t55td [1.579416692s]
Feb 16 14:12:45.763: INFO: Created: latency-svc-gqlp2
Feb 16 14:12:45.777: INFO: Got endpoints: latency-svc-gqlp2 [1.613459013s]
Feb 16 14:12:45.844: INFO: Created: latency-svc-qbzvl
Feb 16 14:12:45.932: INFO: Got endpoints: latency-svc-qbzvl [1.604790464s]
Feb 16 14:12:45.959: INFO: Created: latency-svc-87cht
Feb 16 14:12:45.970: INFO: Got endpoints: latency-svc-87cht [1.441731207s]
Feb 16 14:12:46.027: INFO: Created: latency-svc-tl4ff
Feb 16 14:12:46.248: INFO: Got endpoints: latency-svc-tl4ff [1.515419887s]
Feb 16 14:12:46.277: INFO: Created: latency-svc-zpt85
Feb 16 14:12:46.277: INFO: Got endpoints: latency-svc-zpt85 [1.477812021s]
Feb 16 14:12:46.370: INFO: Created: latency-svc-dbjcg
Feb 16 14:12:46.397: INFO: Got endpoints: latency-svc-dbjcg [1.470431215s]
Feb 16 14:12:46.440: INFO: Created: latency-svc-rfbpf
Feb 16 14:12:46.511: INFO: Got endpoints: latency-svc-rfbpf [1.527321249s]
Feb 16 14:12:46.604: INFO: Created: latency-svc-hsmd7
Feb 16 14:12:46.697: INFO: Got endpoints: latency-svc-hsmd7 [1.618174768s]
Feb 16 14:12:46.808: INFO: Created: latency-svc-zs7ks
Feb 16 14:12:46.961: INFO: Got endpoints: latency-svc-zs7ks [1.816184451s]
Feb 16 14:12:46.995: INFO: Created: latency-svc-k48l8
Feb 16 14:12:47.006: INFO: Got endpoints: latency-svc-k48l8 [1.664718278s]
Feb 16 14:12:47.051: INFO: Created: latency-svc-gb8fg
Feb 16 14:12:47.123: INFO: Got endpoints: latency-svc-gb8fg [1.706001749s]
Feb 16 14:12:47.153: INFO: Created: latency-svc-h6bvz
Feb 16 14:12:47.156: INFO: Got endpoints: latency-svc-h6bvz [1.703045421s]
Feb 16 14:12:47.200: INFO: Created: latency-svc-74qfk
Feb 16 14:12:47.205: INFO: Got endpoints: latency-svc-74qfk [1.696246827s]
Feb 16 14:12:47.339: INFO: Created: latency-svc-8n5z7
Feb 16 14:12:47.339: INFO: Got endpoints: latency-svc-8n5z7 [1.671268195s]
Feb 16 14:12:47.376: INFO: Created: latency-svc-hzd7r
Feb 16 14:12:47.376: INFO: Got endpoints: latency-svc-hzd7r [1.700069065s]
Feb 16 14:12:47.427: INFO: Created: latency-svc-kn9xn
Feb 16 14:12:47.486: INFO: Got endpoints: latency-svc-kn9xn [1.708667446s]
Feb 16 14:12:47.550: INFO: Created: latency-svc-qshv9
Feb 16 14:12:47.555: INFO: Got endpoints: latency-svc-qshv9 [1.622839541s]
Feb 16 14:12:47.677: INFO: Created: latency-svc-7lps4
Feb 16 14:12:47.684: INFO: Got endpoints: latency-svc-7lps4 [1.714264555s]
Feb 16 14:12:47.728: INFO: Created: latency-svc-tp9gc
Feb 16 14:12:47.735: INFO: Got endpoints: latency-svc-tp9gc [1.487028905s]
Feb 16 14:12:47.875: INFO: Created: latency-svc-h99q8
Feb 16 14:12:47.876: INFO: Got endpoints: latency-svc-h99q8 [1.598315686s]
Feb 16 14:12:47.941: INFO: Created: latency-svc-klc2v
Feb 16 14:12:48.021: INFO: Got endpoints: latency-svc-klc2v [1.623415656s]
Feb 16 14:12:48.072: INFO: Created: latency-svc-hbmnm
Feb 16 14:12:48.100: INFO: Got endpoints: latency-svc-hbmnm [1.588579858s]
Feb 16 14:12:48.108: INFO: Created: latency-svc-9d42n
Feb 16 14:12:48.117: INFO: Got endpoints: latency-svc-9d42n [1.419403776s]
Feb 16 14:12:48.254: INFO: Created: latency-svc-lr5s8
Feb 16 14:12:48.262: INFO: Got endpoints: latency-svc-lr5s8 [1.300885303s]
Feb 16 14:12:48.308: INFO: Created: latency-svc-fp8pp
Feb 16 14:12:48.381: INFO: Got endpoints: latency-svc-fp8pp [1.374745032s]
Feb 16 14:12:48.411: INFO: Created: latency-svc-zs2pr
Feb 16 14:12:48.419: INFO: Got endpoints: latency-svc-zs2pr [1.295906437s]
Feb 16 14:12:48.461: INFO: Created: latency-svc-58wbj
Feb 16 14:12:48.566: INFO: Created: latency-svc-wbb6s
Feb 16 14:12:48.567: INFO: Got endpoints: latency-svc-58wbj [1.410510445s]
Feb 16 14:12:48.637: INFO: Got endpoints: latency-svc-wbb6s [1.432342052s]
Feb 16 14:12:48.641: INFO: Created: latency-svc-49hw8
Feb 16 14:12:48.648: INFO: Got endpoints: latency-svc-49hw8 [1.308729573s]
Feb 16 14:12:48.781: INFO: Created: latency-svc-dz2jg
Feb 16 14:12:48.803: INFO: Got endpoints: latency-svc-dz2jg [1.42737678s]
Feb 16 14:12:48.934: INFO: Created: latency-svc-mnx2s
Feb 16 14:12:48.947: INFO: Got endpoints: latency-svc-mnx2s [1.461488185s]
Feb 16 14:12:49.152: INFO: Created: latency-svc-krfpj
Feb 16 14:12:49.161: INFO: Got endpoints: latency-svc-krfpj [1.606111139s]
Feb 16 14:12:49.215: INFO: Created: latency-svc-v8fgv
Feb 16 14:12:49.231: INFO: Got endpoints: latency-svc-v8fgv [1.545953798s]
Feb 16 14:12:49.343: INFO: Created: latency-svc-hq96d
Feb 16 14:12:49.354: INFO: Got endpoints: latency-svc-hq96d [1.618825131s]
Feb 16 14:12:49.413: INFO: Created: latency-svc-hft8c
Feb 16 14:12:49.415: INFO: Got endpoints: latency-svc-hft8c [1.539673651s]
Feb 16 14:12:49.547: INFO: Created: latency-svc-nzd2z
Feb 16 14:12:49.558: INFO: Got endpoints: latency-svc-nzd2z [1.536701709s]
Feb 16 14:12:49.608: INFO: Created: latency-svc-zhk6d
Feb 16 14:12:49.613: INFO: Got endpoints: latency-svc-zhk6d [1.512652592s]
Feb 16 14:12:49.756: INFO: Created: latency-svc-kjrlm
Feb 16 14:12:49.772: INFO: Got endpoints: latency-svc-kjrlm [1.655275228s]
Feb 16 14:12:49.857: INFO: Created: latency-svc-6mlbp
Feb 16 14:12:49.973: INFO: Got endpoints: latency-svc-6mlbp [1.710649597s]
Feb 16 14:12:49.981: INFO: Created: latency-svc-9n5m9
Feb 16 14:12:49.986: INFO: Got endpoints: latency-svc-9n5m9 [1.605357886s]
Feb 16 14:12:50.046: INFO: Created: latency-svc-z75q9
Feb 16 14:12:50.237: INFO: Got endpoints: latency-svc-z75q9 [1.817198689s]
Feb 16 14:12:50.244: INFO: Created: latency-svc-gd8lx
Feb 16 14:12:50.257: INFO: Got endpoints: latency-svc-gd8lx [1.689560361s]
Feb 16 14:12:50.453: INFO: Created: latency-svc-fkptn
Feb 16 14:12:50.453: INFO: Got endpoints: latency-svc-fkptn [1.815206783s]
Feb 16 14:12:50.530: INFO: Created: latency-svc-xt6qh
Feb 16 14:12:50.636: INFO: Got endpoints: latency-svc-xt6qh [1.987408901s]
Feb 16 14:12:50.651: INFO: Created: latency-svc-nfsnn
Feb 16 14:12:50.665: INFO: Got endpoints: latency-svc-nfsnn [1.86118002s]
Feb 16 14:12:50.723: INFO: Created: latency-svc-lmk8d
Feb 16 14:12:50.727: INFO: Got endpoints: latency-svc-lmk8d [1.779595952s]
Feb 16 14:12:50.914: INFO: Created: latency-svc-8rm5x
Feb 16 14:12:50.914: INFO: Got endpoints: latency-svc-8rm5x [1.752258284s]
Feb 16 14:12:50.988: INFO: Created: latency-svc-srbhh
Feb 16 14:12:51.065: INFO: Got endpoints: latency-svc-srbhh [1.834016192s]
Feb 16 14:12:51.142: INFO: Created: latency-svc-6gq5x
Feb 16 14:12:51.164: INFO: Got endpoints: latency-svc-6gq5x [1.809868329s]
Feb 16 14:12:51.396: INFO: Created: latency-svc-48gcg
Feb 16 14:12:51.414: INFO: Got endpoints: latency-svc-48gcg [1.998050129s]
Feb 16 14:12:51.470: INFO: Created: latency-svc-hphj8
Feb 16 14:12:51.481: INFO: Got endpoints: latency-svc-hphj8 [1.92286678s]
Feb 16 14:12:51.611: INFO: Created: latency-svc-hhg57
Feb 16 14:12:51.629: INFO: Got endpoints: latency-svc-hhg57 [2.015226336s]
Feb 16 14:12:51.756: INFO: Created: latency-svc-t7fjl
Feb 16 14:12:51.763: INFO: Got endpoints: latency-svc-t7fjl [1.990068023s]
Feb 16 14:12:51.841: INFO: Created: latency-svc-7bnfw
Feb 16 14:12:51.849: INFO: Got endpoints: latency-svc-7bnfw [1.875515845s]
Feb 16 14:12:51.995: INFO: Created: latency-svc-wj7d6
Feb 16 14:12:52.038: INFO: Got endpoints: latency-svc-wj7d6 [2.051912006s]
Feb 16 14:12:52.041: INFO: Created: latency-svc-bll7d
Feb 16 14:12:52.047: INFO: Got endpoints: latency-svc-bll7d [1.808687282s]
Feb 16 14:12:52.194: INFO: Created: latency-svc-ws4dq
Feb 16 14:12:52.212: INFO: Got endpoints: latency-svc-ws4dq [1.954455122s]
Feb 16 14:12:52.291: INFO: Created: latency-svc-4qthr
Feb 16 14:12:52.370: INFO: Got endpoints: latency-svc-4qthr [1.917200252s]
Feb 16 14:12:52.409: INFO: Created: latency-svc-k5bpd
Feb 16 14:12:52.413: INFO: Got endpoints: latency-svc-k5bpd [1.776842724s]
Feb 16 14:12:52.556: INFO: Created: latency-svc-5dt2d
Feb 16 14:12:53.392: INFO: Got endpoints: latency-svc-5dt2d [2.727096278s]
Feb 16 14:12:53.441: INFO: Created: latency-svc-8dd9g
Feb 16 14:12:53.444: INFO: Got endpoints: latency-svc-8dd9g [2.716399882s]
Feb 16 14:12:53.556: INFO: Created: latency-svc-xgkgr
Feb 16 14:12:53.565: INFO: Got endpoints: latency-svc-xgkgr [2.650604246s]
Feb 16 14:12:53.617: INFO: Created: latency-svc-sz6l8
Feb 16 14:12:53.630: INFO: Got endpoints: latency-svc-sz6l8 [2.564886545s]
Feb 16 14:12:53.758: INFO: Created: latency-svc-5hwfj
Feb 16 14:12:53.765: INFO: Got endpoints: latency-svc-5hwfj [2.599932044s]
Feb 16 14:12:53.939: INFO: Created: latency-svc-blszw
Feb 16 14:12:53.961: INFO: Got endpoints: latency-svc-blszw [2.54678042s]
Feb 16 14:12:54.015: INFO: Created: latency-svc-zfrj2
Feb 16 14:12:54.017: INFO: Got endpoints: latency-svc-zfrj2 [2.53584142s]
Feb 16 14:12:54.180: INFO: Created: latency-svc-bcrjx
Feb 16 14:12:54.180: INFO: Got endpoints: latency-svc-bcrjx [2.551534357s]
Feb 16 14:12:54.227: INFO: Created: latency-svc-jjgph
Feb 16 14:12:54.234: INFO: Got endpoints: latency-svc-jjgph [2.471294544s]
Feb 16 14:12:54.333: INFO: Created: latency-svc-5xj6l
Feb 16 14:12:54.338: INFO: Got endpoints: latency-svc-5xj6l [2.48912112s]
Feb 16 14:12:54.385: INFO: Created: latency-svc-658kh
Feb 16 14:12:54.392: INFO: Got endpoints: latency-svc-658kh [2.353005551s]
Feb 16 14:12:54.504: INFO: Created: latency-svc-slnrz
Feb 16 14:12:54.507: INFO: Got endpoints: latency-svc-slnrz [2.460177614s]
Feb 16 14:12:54.555: INFO: Created: latency-svc-ms6dc
Feb 16 14:12:54.685: INFO: Created: latency-svc-lqxfj
Feb 16 14:12:54.687: INFO: Got endpoints: latency-svc-ms6dc [2.47495209s]
Feb 16 14:12:54.693: INFO: Got endpoints: latency-svc-lqxfj [2.322124738s]
Feb 16 14:12:54.745: INFO: Created: latency-svc-rkhwg
Feb 16 14:12:54.750: INFO: Got endpoints: latency-svc-rkhwg [2.336881256s]
Feb 16 14:12:54.860: INFO: Created: latency-svc-fzrpd
Feb 16 14:12:54.899: INFO: Got endpoints: latency-svc-fzrpd [1.507145087s]
Feb 16 14:12:55.064: INFO: Created: latency-svc-tjdq7
Feb 16 14:12:55.086: INFO: Got endpoints: latency-svc-tjdq7 [1.642380155s]
Feb 16 14:12:55.244: INFO: Created: latency-svc-g45mg
Feb 16 14:12:55.253: INFO: Got endpoints: latency-svc-g45mg [1.688355378s]
Feb 16 14:12:55.299: INFO: Created: latency-svc-9jr8f
Feb 16 14:12:55.312: INFO: Got endpoints: latency-svc-9jr8f [1.682087722s]
Feb 16 14:12:55.525: INFO: Created: latency-svc-qkdbf
Feb 16 14:12:55.540: INFO: Got endpoints: latency-svc-qkdbf [1.774896861s]
Feb 16 14:12:55.584: INFO: Created: latency-svc-5wx9d
Feb 16 14:12:55.595: INFO: Got endpoints: latency-svc-5wx9d [1.634270099s]
Feb 16 14:12:55.700: INFO: Created: latency-svc-4z8ps
Feb 16 14:12:55.726: INFO: Got endpoints: latency-svc-4z8ps [1.70911852s]
Feb 16 14:12:55.760: INFO: Created: latency-svc-dwr7r
Feb 16 14:12:55.765: INFO: Got endpoints: latency-svc-dwr7r [1.584897334s]
Feb 16 14:12:55.899: INFO: Created: latency-svc-2pzjh
Feb 16 14:12:55.921: INFO: Got endpoints: latency-svc-2pzjh [1.687130781s]
Feb 16 14:12:55.970: INFO: Created: latency-svc-8mj6l
Feb 16 14:12:55.978: INFO: Got endpoints: latency-svc-8mj6l [1.639856901s]
Feb 16 14:12:56.066: INFO: Created: latency-svc-mlghf
Feb 16 14:12:56.098: INFO: Got endpoints: latency-svc-mlghf [1.706368492s]
Feb 16 14:12:56.135: INFO: Created: latency-svc-hkzs5
Feb 16 14:12:56.140: INFO: Got endpoints: latency-svc-hkzs5 [1.632883409s]
Feb 16 14:12:56.306: INFO: Created: latency-svc-c65j6
Feb 16 14:12:56.327: INFO: Got endpoints: latency-svc-c65j6 [1.640346656s]
Feb 16 14:12:56.393: INFO: Created: latency-svc-76z9b
Feb 16 14:12:56.403: INFO: Got endpoints: latency-svc-76z9b [1.709918295s]
Feb 16 14:12:56.496: INFO: Created: latency-svc-wzxds
Feb 16 14:12:56.503: INFO: Got endpoints: latency-svc-wzxds [1.752557426s]
Feb 16 14:12:56.558: INFO: Created: latency-svc-2qqqz
Feb 16 14:12:56.562: INFO: Got endpoints: latency-svc-2qqqz [1.663025361s]
Feb 16 14:12:56.671: INFO: Created: latency-svc-wvhtg
Feb 16 14:12:56.681: INFO: Got endpoints: latency-svc-wvhtg [1.59502542s]
Feb 16 14:12:56.721: INFO: Created: latency-svc-zgjwd
Feb 16 14:12:56.733: INFO: Got endpoints: latency-svc-zgjwd [1.480022858s]
Feb 16 14:12:56.875: INFO: Created: latency-svc-cmpzz
Feb 16 14:12:56.888: INFO: Got endpoints: latency-svc-cmpzz [1.575797311s]
Feb 16 14:12:56.962: INFO: Created: latency-svc-b5q6v
Feb 16 14:12:57.008: INFO: Got endpoints: latency-svc-b5q6v [1.468668597s]
Feb 16 14:12:57.049: INFO: Created: latency-svc-8h2f7
Feb 16 14:12:57.099: INFO: Got endpoints: latency-svc-8h2f7 [1.502810101s]
Feb 16 14:12:57.100: INFO: Created: latency-svc-xq8tf
Feb 16 14:12:57.191: INFO: Got endpoints: latency-svc-xq8tf [1.465038966s]
Feb 16 14:12:57.238: INFO: Created: latency-svc-jvv6x
Feb 16 14:12:57.245: INFO: Got endpoints: latency-svc-jvv6x [1.478820111s]
Feb 16 14:12:57.370: INFO: Created: latency-svc-hmqbg
Feb 16 14:12:57.385: INFO: Got endpoints: latency-svc-hmqbg [1.46309909s]
Feb 16 14:12:57.462: INFO: Created: latency-svc-k2br4
Feb 16 14:12:57.592: INFO: Got endpoints: latency-svc-k2br4 [1.613050825s]
Feb 16 14:12:57.626: INFO: Created: latency-svc-sffcm
Feb 16 14:12:57.634: INFO: Got endpoints: latency-svc-sffcm [1.535320089s]
Feb 16 14:12:57.692: INFO: Created: latency-svc-pfvn7
Feb 16 14:12:57.752: INFO: Got endpoints: latency-svc-pfvn7 [1.611227176s]
Feb 16 14:12:57.853: INFO: Created: latency-svc-twrld
Feb 16 14:12:57.853: INFO: Got endpoints: latency-svc-twrld [1.525424026s]
Feb 16 14:12:57.993: INFO: Created: latency-svc-sqc97
Feb 16 14:12:57.995: INFO: Got endpoints: latency-svc-sqc97 [1.59175656s]
Feb 16 14:12:58.029: INFO: Created: latency-svc-hx6ws
Feb 16 14:12:58.050: INFO: Got endpoints: latency-svc-hx6ws [1.547159049s]
Feb 16 14:12:58.138: INFO: Created: latency-svc-gpbrf
Feb 16 14:12:58.159: INFO: Got endpoints: latency-svc-gpbrf [1.596130153s]
Feb 16 14:12:58.367: INFO: Created: latency-svc-nb9dw
Feb 16 14:12:58.378: INFO: Got endpoints: latency-svc-nb9dw [1.695990725s]
Feb 16 14:12:58.425: INFO: Created: latency-svc-vz8mk
Feb 16 14:12:58.439: INFO: Got endpoints: latency-svc-vz8mk [1.705723887s]
Feb 16 14:12:58.521: INFO: Created: latency-svc-h9dlf
Feb 16 14:12:58.532: INFO: Got endpoints: latency-svc-h9dlf [1.643288915s]
Feb 16 14:12:58.584: INFO: Created: latency-svc-kbdfv
Feb 16 14:12:58.600: INFO: Got endpoints: latency-svc-kbdfv [1.591408806s]
Feb 16 14:12:58.707: INFO: Created: latency-svc-sc2d4
Feb 16 14:12:58.709: INFO: Got endpoints: latency-svc-sc2d4 [1.609926427s]
Feb 16 14:12:58.775: INFO: Created: latency-svc-66t7m
Feb 16 14:12:58.818: INFO: Got endpoints: latency-svc-66t7m [1.626213142s]
Feb 16 14:12:58.879: INFO: Created: latency-svc-8xg82
Feb 16 14:12:59.000: INFO: Created: latency-svc-rtqkz
Feb 16 14:12:59.010: INFO: Got endpoints: latency-svc-8xg82 [1.764988574s]
Feb 16 14:12:59.012: INFO: Got endpoints: latency-svc-rtqkz [1.62731833s]
Feb 16 14:12:59.071: INFO: Created: latency-svc-k4vzb
Feb 16 14:12:59.091: INFO: Got endpoints: latency-svc-k4vzb [1.499314766s]
Feb 16 14:12:59.195: INFO: Created: latency-svc-d9s2p
Feb 16 14:12:59.209: INFO: Got endpoints: latency-svc-d9s2p [1.574990742s]
Feb 16 14:12:59.335: INFO: Created: latency-svc-kk77m
Feb 16 14:12:59.379: INFO: Got endpoints: latency-svc-kk77m [1.626686561s]
Feb 16 14:12:59.386: INFO: Created: latency-svc-p6tzl
Feb 16 14:12:59.391: INFO: Got endpoints: latency-svc-p6tzl [1.53817564s]
Feb 16 14:12:59.502: INFO: Created: latency-svc-rmqpw
Feb 16 14:12:59.510: INFO: Got endpoints: latency-svc-rmqpw [1.515512116s]
Feb 16 14:12:59.549: INFO: Created: latency-svc-mztbn
Feb 16 14:12:59.559: INFO: Got endpoints: latency-svc-mztbn [1.508275861s]
Feb 16 14:12:59.691: INFO: Created: latency-svc-hcpw4
Feb 16 14:12:59.702: INFO: Got endpoints: latency-svc-hcpw4 [1.542598482s]
Feb 16 14:12:59.766: INFO: Created: latency-svc-qkhkp
Feb 16 14:12:59.892: INFO: Created: latency-svc-v4qf6
Feb 16 14:12:59.896: INFO: Got endpoints: latency-svc-qkhkp [1.518425414s]
Feb 16 14:12:59.905: INFO: Got endpoints: latency-svc-v4qf6 [1.465135534s]
Feb 16 14:12:59.976: INFO: Created: latency-svc-ft9gn
Feb 16 14:12:59.997: INFO: Got endpoints: latency-svc-ft9gn [1.465268542s]
Feb 16 14:13:00.097: INFO: Created: latency-svc-rk57t
Feb 16 14:13:00.097: INFO: Got endpoints: latency-svc-rk57t [1.496160867s]
Feb 16 14:13:00.140: INFO: Created: latency-svc-xjsth
Feb 16 14:13:00.212: INFO: Got endpoints: latency-svc-xjsth [1.503501997s]
Feb 16 14:13:00.275: INFO: Created: latency-svc-4cg47
Feb 16 14:13:00.291: INFO: Got endpoints: latency-svc-4cg47 [1.472655374s]
Feb 16 14:13:00.397: INFO: Created: latency-svc-6r6zw
Feb 16 14:13:00.527: INFO: Got endpoints: latency-svc-6r6zw [1.51431973s]
Feb 16 14:13:00.625: INFO: Created: latency-svc-4jx94
Feb 16 14:13:00.667: INFO: Got endpoints: latency-svc-4jx94 [1.65631985s]
Feb 16 14:13:00.714: INFO: Created: latency-svc-xb2z2
Feb 16 14:13:00.744: INFO: Got endpoints: latency-svc-xb2z2 [1.652219219s]
Feb 16 14:13:00.915: INFO: Created: latency-svc-sftgs
Feb 16 14:13:00.915: INFO: Got endpoints: latency-svc-sftgs [1.706168415s]
Feb 16 14:13:01.076: INFO: Created: latency-svc-tstdl
Feb 16 14:13:01.085: INFO: Got endpoints: latency-svc-tstdl [1.705376554s]
Feb 16 14:13:01.121: INFO: Created: latency-svc-kkj8q
Feb 16 14:13:01.191: INFO: Got endpoints: latency-svc-kkj8q [1.800035175s]
Feb 16 14:13:01.215: INFO: Created: latency-svc-4kcs6
Feb 16 14:13:01.276: INFO: Got endpoints: latency-svc-4kcs6 [1.765578802s]
Feb 16 14:13:01.384: INFO: Created: latency-svc-w5gx9
Feb 16 14:13:01.417: INFO: Got endpoints: latency-svc-w5gx9 [1.85850329s]
Feb 16 14:13:01.450: INFO: Created: latency-svc-n8tqt
Feb 16 14:13:01.480: INFO: Got endpoints: latency-svc-n8tqt [1.778105516s]
Feb 16 14:13:01.615: INFO: Created: latency-svc-s42cz
Feb 16 14:13:01.657: INFO: Got endpoints: latency-svc-s42cz [1.760923211s]
Feb 16 14:13:01.684: INFO: Created: latency-svc-dwtpr
Feb 16 14:13:01.691: INFO: Got endpoints: latency-svc-dwtpr [1.78583804s]
Feb 16 14:13:01.789: INFO: Created: latency-svc-nn5n8
Feb 16 14:13:01.809: INFO: Got endpoints: latency-svc-nn5n8 [1.811180856s]
Feb 16 14:13:01.926: INFO: Created: latency-svc-995hq
Feb 16 14:13:01.932: INFO: Got endpoints: latency-svc-995hq [1.835433125s]
Feb 16 14:13:01.979: INFO: Created: latency-svc-n7rrg
Feb 16 14:13:02.021: INFO: Got endpoints: latency-svc-n7rrg [1.808055183s]
Feb 16 14:13:02.094: INFO: Created: latency-svc-vlvbs
Feb 16 14:13:02.125: INFO: Got endpoints: latency-svc-vlvbs [1.832993077s]
Feb 16 14:13:02.191: INFO: Created: latency-svc-t54gf
Feb 16 14:13:02.274: INFO: Got endpoints: latency-svc-t54gf [1.747151186s]
Feb 16 14:13:02.347: INFO: Created: latency-svc-clmdg
Feb 16 14:13:02.347: INFO: Got endpoints: latency-svc-clmdg [1.680013125s]
Feb 16 14:13:02.449: INFO: Created: latency-svc-jn2vz
Feb 16 14:13:02.501: INFO: Got endpoints: latency-svc-jn2vz [1.757018794s]
Feb 16 14:13:02.540: INFO: Created: latency-svc-tn48q
Feb 16 14:13:02.612: INFO: Got endpoints: latency-svc-tn48q [1.697123709s]
Feb 16 14:13:02.667: INFO: Created: latency-svc-88k76
Feb 16 14:13:02.682: INFO: Got endpoints: latency-svc-88k76 [1.597049419s]
Feb 16 14:13:02.682: INFO: Latencies: [76.501933ms 212.11355ms 238.317323ms 406.918174ms 443.40684ms 509.026205ms 558.977046ms 694.779809ms 746.339299ms 993.98487ms 1.295906437s 1.300885303s 1.308729573s 1.374745032s 1.410510445s 1.419403776s 1.42737678s 1.432342052s 1.441731207s 1.461488185s 1.46309909s 1.465038966s 1.465135534s 1.465268542s 1.468668597s 1.470431215s 1.472655374s 1.477812021s 1.478820111s 1.480022858s 1.487028905s 1.496160867s 1.499314766s 1.502810101s 1.503501997s 1.507145087s 1.508275861s 1.512652592s 1.51431973s 1.515419887s 1.515512116s 1.518425414s 1.525355868s 1.525424026s 1.527321249s 1.535320089s 1.536701709s 1.53817564s 1.539673651s 1.542598482s 1.545953798s 1.547159049s 1.574990742s 1.575797311s 1.579416692s 1.584897334s 1.588579858s 1.591408806s 1.59175656s 1.59502542s 1.596130153s 1.597049419s 1.598315686s 1.604790464s 1.605357886s 1.606111139s 1.609926427s 1.611227176s 1.613050825s 1.613309525s 1.613459013s 1.618174768s 1.618825131s 1.622839541s 1.623415656s 1.626213142s 1.626686561s 1.62731833s 1.632883409s 1.634270099s 1.639856901s 1.640346656s 1.642380155s 1.643288915s 1.652219219s 1.655275228s 1.65631985s 1.663025361s 1.664718278s 1.671268195s 1.679706156s 1.680013125s 1.682087722s 1.687130781s 1.688355378s 1.689560361s 1.695990725s 1.696246827s 1.697123709s 1.700069065s 1.703045421s 1.705376554s 1.705723887s 1.706001749s 1.706168415s 1.706368492s 1.708667446s 1.70911852s 1.709918295s 1.710649597s 1.714264555s 1.715919081s 1.72530181s 1.747151186s 1.752258284s 1.752557426s 1.757018794s 1.760923211s 1.764988574s 1.765578802s 1.774896861s 1.776163944s 1.776842724s 1.778105516s 1.779595952s 1.782562083s 1.78583804s 1.800035175s 1.808055183s 1.808687282s 1.809868329s 1.811180856s 1.815206783s 1.816184451s 1.817198689s 1.829129886s 1.832993077s 1.834016192s 1.835433125s 1.846474718s 1.85850329s 1.86118002s 1.875515845s 1.890730474s 1.899680085s 1.917200252s 1.92286678s 1.944604163s 1.954455122s 1.971707551s 1.981929267s 1.987408901s 
1.990068023s 1.998050129s 2.00738657s 2.015226336s 2.015820473s 2.040764792s 2.051912006s 2.059666573s 2.083757046s 2.094505423s 2.149745783s 2.161489197s 2.178496425s 2.233974001s 2.255405983s 2.265077316s 2.277652473s 2.29228116s 2.300051634s 2.322124738s 2.336881256s 2.349877408s 2.353005551s 2.407325907s 2.421232883s 2.442764863s 2.456181708s 2.460177614s 2.471294544s 2.47495209s 2.48912112s 2.50114221s 2.53584142s 2.540578571s 2.54330727s 2.54678042s 2.551534357s 2.564886545s 2.599932044s 2.623856228s 2.650604246s 2.675661419s 2.686226299s 2.716399882s 2.727096278s 2.75100983s 2.823636398s 2.827578648s]
Feb 16 14:13:02.683: INFO: 50 %ile: 1.703045421s
Feb 16 14:13:02.683: INFO: 90 %ile: 2.471294544s
Feb 16 14:13:02.683: INFO: 99 %ile: 2.823636398s
Feb 16 14:13:02.683: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:13:02.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9948" for this suite.
Feb 16 14:13:50.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:13:50.928: INFO: namespace svc-latency-9948 deletion completed in 48.236703663s

• [SLOW TEST:81.097 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
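The latency test above collects 200 endpoint-propagation samples and reports the 50/90/99 %ile over the sorted list. A nearest-rank percentile over raw samples reproduces that kind of summary — the exact index convention the e2e framework uses may differ slightly, and the sample values below are toy data, not the run's:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile over raw latency samples (seconds)."""
    s = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(s)))  # 1-based nearest rank
    return s[rank - 1]

# Toy data standing in for the 200 measured endpoint latencies.
latencies = [0.076, 0.212, 0.443, 1.70, 1.81, 2.47, 2.55, 2.75, 2.82, 2.83]
print(percentile(latencies, 50),
      percentile(latencies, 90),
      percentile(latencies, 99))  # → 1.81 2.82 2.83
```

Note that "Got endpoints" timestamps above measure the full path from service creation to the endpoint appearing in a watch, which is why individual samples range from ~76ms to ~2.8s.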
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:13:50.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 16 14:13:59.284: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:13:59.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1609" for this suite.
Feb 16 14:14:05.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:14:05.647: INFO: namespace container-runtime-1609 deletion completed in 6.281581098s

• [SLOW TEST:14.719 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
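The test above exercises TerminationMessagePolicy=FallbackToLogsOnError: when the container wrote nothing to its termination-message file and exited with an error, the kubelet falls back to the tail of the container log (the "DONE" the test matched). A hedged sketch of just that selection logic — the function is illustrative, not the kubelet's code:

```python
def termination_message(policy, exit_code, file_contents, log_tail):
    """Approximate the kubelet's choice of a termination message.

    The real kubelet also truncates the message and reads the log tail
    itself; this only models the decision the test asserts on.
    """
    if file_contents:                       # terminationMessagePath wins
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail                     # fall back to log output on error
    return ""

# The container in the log exits non-zero having printed DONE,
# with an empty termination-message file:
print(termination_message("FallbackToLogsOnError", 1, "", "DONE"))  # → DONE
```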
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:14:05.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 16 14:14:05.709: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix932106562/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:14:05.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7977" for this suite.
Feb 16 14:14:11.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:14:11.921: INFO: namespace kubectl-7977 deletion completed in 6.14761685s

• [SLOW TEST:6.274 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
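`--unix-socket` makes `kubectl proxy` listen on an AF_UNIX socket instead of TCP, and the test then retrieves `/api/` through it. A self-contained sketch of speaking HTTP over a unix socket — a tiny stand-in server is included here because no real proxy is assumed:

```python
import os
import socket
import tempfile
import threading

def handle_one(srv, body=b"ok"):
    """Stand-in for `kubectl proxy --unix-socket=...`: answer one request."""
    conn, _ = srv.accept()
    conn.recv(4096)  # read and discard the request
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                 % (len(body), body))
    conn.close()

def get_over_unix_socket(path, target="/api/"):
    """Rough equivalent of `curl --unix-socket PATH http://localhost/api/`."""
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(path)
    c.sendall(b"GET %s HTTP/1.0\r\nHost: localhost\r\n\r\n" % target.encode())
    chunks = []
    while True:
        chunk = c.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    c.close()
    return b"".join(chunks)

sock_path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)   # bind+listen before the thread starts, so no race
srv.listen(1)
t = threading.Thread(target=handle_one, args=(srv,))
t.start()
resp = get_over_unix_socket(sock_path)
t.join()
srv.close()
print(resp)
```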
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:14:11.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 14:14:20.271: INFO: Waiting up to 5m0s for pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382" in namespace "pods-1596" to be "success or failure"
Feb 16 14:14:20.393: INFO: Pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382": Phase="Pending", Reason="", readiness=false. Elapsed: 122.271095ms
Feb 16 14:14:22.401: INFO: Pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130163272s
Feb 16 14:14:24.411: INFO: Pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139811593s
Feb 16 14:14:26.419: INFO: Pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147985806s
Feb 16 14:14:28.428: INFO: Pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.15775115s
STEP: Saw pod success
Feb 16 14:14:28.429: INFO: Pod "client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382" satisfied condition "success or failure"
Feb 16 14:14:28.434: INFO: Trying to get logs from node iruya-node pod client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382 container env3cont: 
STEP: delete the pod
Feb 16 14:14:28.521: INFO: Waiting for pod client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382 to disappear
Feb 16 14:14:28.526: INFO: Pod client-envvars-3e83784b-1275-4ad1-a211-0201eb0e4382 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:14:28.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1596" for this suite.
Feb 16 14:15:20.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:15:20.682: INFO: namespace pods-1596 deletion completed in 52.149935863s

• [SLOW TEST:68.761 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
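The Pods test checks the docker-link-style environment variables the kubelet injects for services that existed when the pod started. For a hypothetical service named `fooservice` the names follow a fixed scheme (upper-case, dashes to underscores), per the Kubernetes service documentation; the helper itself is illustrative:

```python
def service_env_vars(name, host, port, protocol="TCP"):
    """Build the service environment variables the kubelet injects.

    Naming scheme per the Kubernetes docs; this helper is illustrative,
    not part of any API.
    """
    key = name.upper().replace("-", "_")
    url = f"{protocol.lower()}://{host}:{port}"
    return {
        f"{key}_SERVICE_HOST": host,
        f"{key}_SERVICE_PORT": str(port),
        f"{key}_PORT": url,
        f"{key}_PORT_{port}_{protocol}": url,
        f"{key}_PORT_{port}_{protocol}_PROTO": protocol.lower(),
        f"{key}_PORT_{port}_{protocol}_ADDR": host,
        f"{key}_PORT_{port}_{protocol}_PORT": str(port),
    }

env = service_env_vars("fooservice", "10.96.0.10", 8765)
print(env["FOOSERVICE_SERVICE_HOST"])  # → 10.96.0.10
```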
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:15:20.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1981
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 16 14:15:20.851: INFO: Found 0 stateful pods, waiting for 3
Feb 16 14:15:31.011: INFO: Found 2 stateful pods, waiting for 3
Feb 16 14:15:40.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:15:40.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:15:40.892: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 14:15:50.863: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:15:50.863: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:15:50.863: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 16 14:15:50.968: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 16 14:16:01.023: INFO: Updating stateful set ss2
Feb 16 14:16:01.090: INFO: Waiting for Pod statefulset-1981/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 16 14:16:11.525: INFO: Found 2 stateful pods, waiting for 3
Feb 16 14:16:21.636: INFO: Found 2 stateful pods, waiting for 3
Feb 16 14:16:31.535: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:16:31.535: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:16:31.535: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 16 14:16:31.570: INFO: Updating stateful set ss2
Feb 16 14:16:31.618: INFO: Waiting for Pod statefulset-1981/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 14:16:41.631: INFO: Waiting for Pod statefulset-1981/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 14:16:51.709: INFO: Updating stateful set ss2
Feb 16 14:16:51.817: INFO: Waiting for StatefulSet statefulset-1981/ss2 to complete update
Feb 16 14:16:51.817: INFO: Waiting for Pod statefulset-1981/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 14:17:01.835: INFO: Waiting for StatefulSet statefulset-1981/ss2 to complete update
Feb 16 14:17:01.835: INFO: Waiting for Pod statefulset-1981/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 14:17:11.835: INFO: Waiting for StatefulSet statefulset-1981/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 16 14:17:21.834: INFO: Deleting all statefulset in ns statefulset-1981
Feb 16 14:17:21.876: INFO: Scaling statefulset ss2 to 0
Feb 16 14:18:01.930: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 14:18:01.940: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:18:01.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1981" for this suite.
Feb 16 14:18:08.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:18:08.090: INFO: namespace statefulset-1981 deletion completed in 6.108329729s

• [SLOW TEST:167.408 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
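The canary and phased updates above rely on `spec.updateStrategy.rollingUpdate.partition`: only pods whose ordinal is >= the partition are moved to the update revision, so lowering the partition step by step phases the rollout. A simplified sketch of that selection rule (ignoring readiness gating and pod management policy):

```python
def target_revision(ordinal, partition, current_rev, update_rev):
    """Which ControllerRevision a StatefulSet pod should land on.

    Simplified RollingUpdate rule: pods with ordinal >= partition get
    the update revision; the rest stay on the current revision. A
    partition larger than the replica count updates nothing, which is
    the "Not applying an update" step in the log above.
    """
    return update_rev if ordinal >= partition else current_rev

# Canary as in the log: 3 replicas, partition=2 -> only ss2-2 is updated.
revs = {f"ss2-{i}": target_revision(i, 2, "ss2-6c5cd755cd", "ss2-7c9b54fd4c")
        for i in range(3)}
print(revs)
```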
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:18:08.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 16 14:18:08.170: INFO: Waiting up to 5m0s for pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a" in namespace "containers-3301" to be "success or failure"
Feb 16 14:18:08.172: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463396ms
Feb 16 14:18:10.193: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023285612s
Feb 16 14:18:12.213: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043381046s
Feb 16 14:18:14.220: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050598782s
Feb 16 14:18:16.228: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057988167s
Feb 16 14:18:18.235: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064781218s
STEP: Saw pod success
Feb 16 14:18:18.235: INFO: Pod "client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a" satisfied condition "success or failure"
Feb 16 14:18:18.238: INFO: Trying to get logs from node iruya-node pod client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a container test-container: 
STEP: delete the pod
Feb 16 14:18:18.488: INFO: Waiting for pod client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a to disappear
Feb 16 14:18:18.494: INFO: Pod client-containers-d0b546ac-0699-4fb4-90f7-7134768f104a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:18:18.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3301" for this suite.
Feb 16 14:18:24.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:18:24.640: INFO: namespace containers-3301 deletion completed in 6.139538606s

• [SLOW TEST:16.549 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
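The "override all" pod in this test sets both `command` and `args`, replacing the image's ENTRYPOINT and CMD. The effective invocation follows the documented Kubernetes/Docker combination rules, sketched here:

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Combine image ENTRYPOINT/CMD with a container's command/args.

    Documented Kubernetes rules:
      neither set      -> ENTRYPOINT + CMD
      command only     -> command          (image CMD is ignored)
      args only        -> ENTRYPOINT + args
      command and args -> command + args   ("override all", as tested)
    """
    if command is None and args is None:
        return list(image_entrypoint) + list(image_cmd)
    if args is None:
        return list(command)
    if command is None:
        return list(image_entrypoint) + list(args)
    return list(command) + list(args)

print(effective_invocation(["/entrypoint"], ["default-arg"],
                           command=["/bin/sh", "-c"], args=["echo override"]))
```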
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:18:24.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3083/configmap-test-ee02da79-da87-43fd-95ed-6e876a396d96
STEP: Creating a pod to test consume configMaps
Feb 16 14:18:24.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167" in namespace "configmap-3083" to be "success or failure"
Feb 16 14:18:24.746: INFO: Pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167": Phase="Pending", Reason="", readiness=false. Elapsed: 3.37187ms
Feb 16 14:18:26.751: INFO: Pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00898375s
Feb 16 14:18:28.757: INFO: Pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015059719s
Feb 16 14:18:31.177: INFO: Pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434294326s
Feb 16 14:18:33.185: INFO: Pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.443114792s
STEP: Saw pod success
Feb 16 14:18:33.185: INFO: Pod "pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167" satisfied condition "success or failure"
Feb 16 14:18:33.189: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167 container env-test: 
STEP: delete the pod
Feb 16 14:18:33.451: INFO: Waiting for pod pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167 to disappear
Feb 16 14:18:33.476: INFO: Pod pod-configmaps-5e8b8719-58b4-4773-9060-84e19a8cb167 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:18:33.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3083" for this suite.
Feb 16 14:18:39.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:18:39.643: INFO: namespace configmap-3083 deletion completed in 6.157626804s

• [SLOW TEST:15.003 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:18:39.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-70674261-ebe8-463b-9523-b57475058595
STEP: Creating secret with name s-test-opt-upd-801c138b-6e68-4f2a-ba13-38f2e1ffcec3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-70674261-ebe8-463b-9523-b57475058595
STEP: Updating secret s-test-opt-upd-801c138b-6e68-4f2a-ba13-38f2e1ffcec3
STEP: Creating secret with name s-test-opt-create-a85b4c30-d568-42a0-9b25-db65642d053b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:18:56.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6736" for this suite.
Feb 16 14:19:18.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:19:18.207: INFO: namespace projected-6736 deletion completed in 22.09949473s

• [SLOW TEST:38.563 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:19:18.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3510
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3510
STEP: Creating statefulset with conflicting port in namespace statefulset-3510
STEP: Waiting until pod test-pod will start running in namespace statefulset-3510
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3510
Feb 16 14:19:28.411: INFO: Observed stateful pod in namespace: statefulset-3510, name: ss-0, uid: e34b4744-362d-42d4-a172-a2bdac8fedd8, status phase: Pending. Waiting for statefulset controller to delete.
Feb 16 14:19:36.494: INFO: Observed stateful pod in namespace: statefulset-3510, name: ss-0, uid: e34b4744-362d-42d4-a172-a2bdac8fedd8, status phase: Failed. Waiting for statefulset controller to delete.
Feb 16 14:19:36.531: INFO: Observed stateful pod in namespace: statefulset-3510, name: ss-0, uid: e34b4744-362d-42d4-a172-a2bdac8fedd8, status phase: Failed. Waiting for statefulset controller to delete.
Feb 16 14:19:36.544: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3510
STEP: Removing pod with conflicting port in namespace statefulset-3510
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3510 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 16 14:19:46.895: INFO: Deleting all statefulset in ns statefulset-3510
Feb 16 14:19:46.901: INFO: Scaling statefulset ss to 0
Feb 16 14:19:56.947: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 14:19:56.951: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:19:56.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3510" for this suite.
Feb 16 14:20:03.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:20:03.122: INFO: namespace statefulset-3510 deletion completed in 6.140052944s

• [SLOW TEST:44.916 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:20:03.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8849
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 14:20:03.184: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 14:20:41.404: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-8849 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 14:20:41.405: INFO: >>> kubeConfig: /root/.kube/config
I0216 14:20:41.488652       8 log.go:172] (0xc0007d02c0) (0xc0013c4d20) Create stream
I0216 14:20:41.488712       8 log.go:172] (0xc0007d02c0) (0xc0013c4d20) Stream added, broadcasting: 1
I0216 14:20:41.500322       8 log.go:172] (0xc0007d02c0) Reply frame received for 1
I0216 14:20:41.500385       8 log.go:172] (0xc0007d02c0) (0xc0011f28c0) Create stream
I0216 14:20:41.500405       8 log.go:172] (0xc0007d02c0) (0xc0011f28c0) Stream added, broadcasting: 3
I0216 14:20:41.502771       8 log.go:172] (0xc0007d02c0) Reply frame received for 3
I0216 14:20:41.502818       8 log.go:172] (0xc0007d02c0) (0xc0013c4f00) Create stream
I0216 14:20:41.502832       8 log.go:172] (0xc0007d02c0) (0xc0013c4f00) Stream added, broadcasting: 5
I0216 14:20:41.506701       8 log.go:172] (0xc0007d02c0) Reply frame received for 5
I0216 14:20:41.714234       8 log.go:172] (0xc0007d02c0) Data frame received for 3
I0216 14:20:41.714282       8 log.go:172] (0xc0011f28c0) (3) Data frame handling
I0216 14:20:41.714304       8 log.go:172] (0xc0011f28c0) (3) Data frame sent
I0216 14:20:41.924532       8 log.go:172] (0xc0007d02c0) Data frame received for 1
I0216 14:20:41.924631       8 log.go:172] (0xc0007d02c0) (0xc0011f28c0) Stream removed, broadcasting: 3
I0216 14:20:41.924687       8 log.go:172] (0xc0013c4d20) (1) Data frame handling
I0216 14:20:41.924702       8 log.go:172] (0xc0013c4d20) (1) Data frame sent
I0216 14:20:41.924723       8 log.go:172] (0xc0007d02c0) (0xc0013c4f00) Stream removed, broadcasting: 5
I0216 14:20:41.924778       8 log.go:172] (0xc0007d02c0) (0xc0013c4d20) Stream removed, broadcasting: 1
I0216 14:20:41.925039       8 log.go:172] (0xc0007d02c0) Go away received
I0216 14:20:41.926272       8 log.go:172] (0xc0007d02c0) (0xc0013c4d20) Stream removed, broadcasting: 1
I0216 14:20:41.926379       8 log.go:172] (0xc0007d02c0) (0xc0011f28c0) Stream removed, broadcasting: 3
I0216 14:20:41.926435       8 log.go:172] (0xc0007d02c0) (0xc0013c4f00) Stream removed, broadcasting: 5
Feb 16 14:20:41.926: INFO: Waiting for endpoints: map[]
Feb 16 14:20:41.935: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8849 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 14:20:41.935: INFO: >>> kubeConfig: /root/.kube/config
I0216 14:20:41.990881       8 log.go:172] (0xc000460e70) (0xc00278a140) Create stream
I0216 14:20:41.990934       8 log.go:172] (0xc000460e70) (0xc00278a140) Stream added, broadcasting: 1
I0216 14:20:41.997254       8 log.go:172] (0xc000460e70) Reply frame received for 1
I0216 14:20:41.997281       8 log.go:172] (0xc000460e70) (0xc0011f2aa0) Create stream
I0216 14:20:41.997289       8 log.go:172] (0xc000460e70) (0xc0011f2aa0) Stream added, broadcasting: 3
I0216 14:20:41.998792       8 log.go:172] (0xc000460e70) Reply frame received for 3
I0216 14:20:41.998828       8 log.go:172] (0xc000460e70) (0xc0011f2c80) Create stream
I0216 14:20:41.998838       8 log.go:172] (0xc000460e70) (0xc0011f2c80) Stream added, broadcasting: 5
I0216 14:20:42.000227       8 log.go:172] (0xc000460e70) Reply frame received for 5
I0216 14:20:42.171967       8 log.go:172] (0xc000460e70) Data frame received for 3
I0216 14:20:42.172048       8 log.go:172] (0xc0011f2aa0) (3) Data frame handling
I0216 14:20:42.172083       8 log.go:172] (0xc0011f2aa0) (3) Data frame sent
I0216 14:20:42.346215       8 log.go:172] (0xc000460e70) Data frame received for 1
I0216 14:20:42.346310       8 log.go:172] (0xc000460e70) (0xc0011f2aa0) Stream removed, broadcasting: 3
I0216 14:20:42.346362       8 log.go:172] (0xc00278a140) (1) Data frame handling
I0216 14:20:42.346380       8 log.go:172] (0xc00278a140) (1) Data frame sent
I0216 14:20:42.346403       8 log.go:172] (0xc000460e70) (0xc00278a140) Stream removed, broadcasting: 1
I0216 14:20:42.346807       8 log.go:172] (0xc000460e70) (0xc0011f2c80) Stream removed, broadcasting: 5
I0216 14:20:42.346847       8 log.go:172] (0xc000460e70) Go away received
I0216 14:20:42.346877       8 log.go:172] (0xc000460e70) (0xc00278a140) Stream removed, broadcasting: 1
I0216 14:20:42.346895       8 log.go:172] (0xc000460e70) (0xc0011f2aa0) Stream removed, broadcasting: 3
I0216 14:20:42.346914       8 log.go:172] (0xc000460e70) (0xc0011f2c80) Stream removed, broadcasting: 5
Feb 16 14:20:42.347: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:20:42.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8849" for this suite.
Feb 16 14:21:07.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:21:07.274: INFO: namespace pod-network-test-8849 deletion completed in 24.201911736s

• [SLOW TEST:64.152 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:21:07.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5242
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 14:21:07.385: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 14:21:41.594: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5242 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 14:21:41.594: INFO: >>> kubeConfig: /root/.kube/config
I0216 14:21:41.672784       8 log.go:172] (0xc000ffce70) (0xc000a24000) Create stream
I0216 14:21:41.672846       8 log.go:172] (0xc000ffce70) (0xc000a24000) Stream added, broadcasting: 1
I0216 14:21:41.683649       8 log.go:172] (0xc000ffce70) Reply frame received for 1
I0216 14:21:41.683747       8 log.go:172] (0xc000ffce70) (0xc00087e820) Create stream
I0216 14:21:41.683786       8 log.go:172] (0xc000ffce70) (0xc00087e820) Stream added, broadcasting: 3
I0216 14:21:41.687702       8 log.go:172] (0xc000ffce70) Reply frame received for 3
I0216 14:21:41.687840       8 log.go:172] (0xc000ffce70) (0xc000a241e0) Create stream
I0216 14:21:41.687856       8 log.go:172] (0xc000ffce70) (0xc000a241e0) Stream added, broadcasting: 5
I0216 14:21:41.691289       8 log.go:172] (0xc000ffce70) Reply frame received for 5
I0216 14:21:41.867071       8 log.go:172] (0xc000ffce70) Data frame received for 3
I0216 14:21:41.867207       8 log.go:172] (0xc00087e820) (3) Data frame handling
I0216 14:21:41.867286       8 log.go:172] (0xc00087e820) (3) Data frame sent
I0216 14:21:42.064027       8 log.go:172] (0xc000ffce70) Data frame received for 1
I0216 14:21:42.064241       8 log.go:172] (0xc000ffce70) (0xc00087e820) Stream removed, broadcasting: 3
I0216 14:21:42.064388       8 log.go:172] (0xc000a24000) (1) Data frame handling
I0216 14:21:42.064553       8 log.go:172] (0xc000a24000) (1) Data frame sent
I0216 14:21:42.064581       8 log.go:172] (0xc000ffce70) (0xc000a241e0) Stream removed, broadcasting: 5
I0216 14:21:42.064607       8 log.go:172] (0xc000ffce70) (0xc000a24000) Stream removed, broadcasting: 1
I0216 14:21:42.064630       8 log.go:172] (0xc000ffce70) Go away received
I0216 14:21:42.065258       8 log.go:172] (0xc000ffce70) (0xc000a24000) Stream removed, broadcasting: 1
I0216 14:21:42.065433       8 log.go:172] (0xc000ffce70) (0xc00087e820) Stream removed, broadcasting: 3
I0216 14:21:42.065469       8 log.go:172] (0xc000ffce70) (0xc000a241e0) Stream removed, broadcasting: 5
Feb 16 14:21:42.065: INFO: Waiting for endpoints: map[]
Feb 16 14:21:42.078: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5242 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 14:21:42.078: INFO: >>> kubeConfig: /root/.kube/config
I0216 14:21:42.158421       8 log.go:172] (0xc001002630) (0xc002dc4a00) Create stream
I0216 14:21:42.158580       8 log.go:172] (0xc001002630) (0xc002dc4a00) Stream added, broadcasting: 1
I0216 14:21:42.173040       8 log.go:172] (0xc001002630) Reply frame received for 1
I0216 14:21:42.173405       8 log.go:172] (0xc001002630) (0xc0013659a0) Create stream
I0216 14:21:42.173679       8 log.go:172] (0xc001002630) (0xc0013659a0) Stream added, broadcasting: 3
I0216 14:21:42.181275       8 log.go:172] (0xc001002630) Reply frame received for 3
I0216 14:21:42.181345       8 log.go:172] (0xc001002630) (0xc00087e8c0) Create stream
I0216 14:21:42.181376       8 log.go:172] (0xc001002630) (0xc00087e8c0) Stream added, broadcasting: 5
I0216 14:21:42.186223       8 log.go:172] (0xc001002630) Reply frame received for 5
I0216 14:21:42.360876       8 log.go:172] (0xc001002630) Data frame received for 3
I0216 14:21:42.360929       8 log.go:172] (0xc0013659a0) (3) Data frame handling
I0216 14:21:42.360950       8 log.go:172] (0xc0013659a0) (3) Data frame sent
I0216 14:21:42.500752       8 log.go:172] (0xc001002630) Data frame received for 1
I0216 14:21:42.500870       8 log.go:172] (0xc001002630) (0xc0013659a0) Stream removed, broadcasting: 3
I0216 14:21:42.500921       8 log.go:172] (0xc002dc4a00) (1) Data frame handling
I0216 14:21:42.500947       8 log.go:172] (0xc002dc4a00) (1) Data frame sent
I0216 14:21:42.501013       8 log.go:172] (0xc001002630) (0xc00087e8c0) Stream removed, broadcasting: 5
I0216 14:21:42.501050       8 log.go:172] (0xc001002630) (0xc002dc4a00) Stream removed, broadcasting: 1
I0216 14:21:42.501064       8 log.go:172] (0xc001002630) Go away received
I0216 14:21:42.502114       8 log.go:172] (0xc001002630) (0xc002dc4a00) Stream removed, broadcasting: 1
I0216 14:21:42.502147       8 log.go:172] (0xc001002630) (0xc0013659a0) Stream removed, broadcasting: 3
I0216 14:21:42.502154       8 log.go:172] (0xc001002630) (0xc00087e8c0) Stream removed, broadcasting: 5
Feb 16 14:21:42.502: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:21:42.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5242" for this suite.
Feb 16 14:22:06.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:22:06.730: INFO: namespace pod-network-test-5242 deletion completed in 24.215365298s

• [SLOW TEST:59.455 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
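The two UDP checks above work by exec'ing `curl` inside the host-test pod against a webserver pod's `/dial` endpoint, which forwards a `hostName` probe to the target pod and reports the answers as JSON in a `responses` field. A minimal sketch of building and parsing that request (the IPs and ports are the ones logged in this run; the helper names are hypothetical, not part of the e2e framework):

```python
import json
from urllib.parse import urlencode

def build_dial_url(proxy_host, proxy_port, target_host, target_port,
                   protocol="udp", request="hostName", tries=1):
    """Recreate the /dial URL the framework curls from host-test-container-pod."""
    query = urlencode({"request": request, "protocol": protocol,
                       "host": target_host, "port": target_port, "tries": tries})
    return f"http://{proxy_host}:{proxy_port}/dial?{query}"

def parse_dial_response(body):
    """The dial endpoint replies with JSON like {"responses": ["<podName>"]};
    an empty list means the probe never reached the target."""
    return json.loads(body).get("responses", [])

url = build_dial_url("10.44.0.2", 8080, "10.44.0.1", 8081)
hosts = parse_dial_response('{"responses": ["netserver-0"]}')
```

`Waiting for endpoints: map[]` in the log is the success condition: the set of endpoints still missing a response has drained to empty.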
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:22:06.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 16 14:22:14.970: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:22:14.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7785" for this suite.
Feb 16 14:22:21.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:22:21.147: INFO: namespace container-runtime-7785 deletion completed in 6.144681483s

• [SLOW TEST:14.417 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
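The termination-message test above runs a container as a non-root user that writes `DONE` to a custom `terminationMessagePath`; after the container exits, the kubelet reads that file back into the container's status, which the test compares against the expected string. A rough local sketch of that read-back, assuming the kubelet's documented default cap of 4096 bytes and tail-read behavior (both are assumptions here, and the helper name is hypothetical):

```python
import os
import tempfile

MAX_TERMINATION_MESSAGE_LEN = 4096  # assumed kubelet default cap on the reported message

def read_termination_message(path, limit=MAX_TERMINATION_MESSAGE_LEN):
    """Read at most `limit` bytes from the end of the termination message file,
    approximating what the kubelet surfaces in the container status."""
    if not os.path.exists(path):
        return ""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - limit))
        return f.read().decode("utf-8", errors="replace")

# The test's container writes DONE to a non-default path before exiting.
with tempfile.TemporaryDirectory() as d:
    msg_path = os.path.join(d, "termination-custom-log")
    with open(msg_path, "w") as f:
        f.write("DONE")
    message = read_termination_message(msg_path)
```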
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:22:21.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 16 14:22:21.218: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 16 14:22:21.231: INFO: Waiting for terminating namespaces to be deleted...
Feb 16 14:22:21.270: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 16 14:22:21.291: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 16 14:22:21.291: INFO: 	Container weave ready: true, restart count 0
Feb 16 14:22:21.291: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 14:22:21.291: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.291: INFO: 	Container kube-bench ready: false, restart count 0
Feb 16 14:22:21.291: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.291: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 16 14:22:21.291: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 16 14:22:21.313: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 16 14:22:21.313: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 16 14:22:21.313: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 16 14:22:21.313: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 16 14:22:21.313: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container coredns ready: true, restart count 0
Feb 16 14:22:21.313: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container etcd ready: true, restart count 0
Feb 16 14:22:21.313: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container weave ready: true, restart count 0
Feb 16 14:22:21.313: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 14:22:21.313: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 16 14:22:21.313: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 16 14:22:21.468: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 16 14:22:21.468: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c.15f3e7f6d6cc7c13], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6879/filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c.15f3e7f7fe991113], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c.15f3e7f9049d1b71], Reason = [Created], Message = [Created container filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c.15f3e7f932f429f2], Reason = [Started], Message = [Started container filler-pod-791b6e5e-d688-49c9-b227-70d4df9d346c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d.15f3e7f6db8a036f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6879/filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d.15f3e7f81e5d81c9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d.15f3e7f9575696d9], Reason = [Created], Message = [Created container filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d.15f3e7f97136caa4], Reason = [Started], Message = [Started container filler-pod-ac968edd-1bb1-4dfe-a025-d4d7a368a08d]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f3e7f9aa5e1f23], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:22:34.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6879" for this suite.
Feb 16 14:22:40.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:22:40.927: INFO: namespace sched-pred-6879 deletion completed in 6.133609093s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.779 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
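The scheduling test above tallies every running pod's CPU requests per node (the `requesting resource cpu=...` lines), starts filler pods sized to consume the remaining allocatable CPU, then creates one more CPU-requesting pod and expects the `FailedScheduling` event with `0/2 nodes are available: 2 Insufficient cpu.` A sketch of that bookkeeping in millicores, using the requests logged for `iruya-server-sfge57q7djm7` in this run (the 1000m allocatable figure is illustrative, not from this log):

```python
def parse_cpu_millis(quantity):
    """Convert a Kubernetes CPU quantity ('100m', '0m', '1') to millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def fits(allocatable_m, existing_requests, new_request):
    """Scheduler-style check: does the new request fit in what's left?"""
    used = sum(parse_cpu_millis(r) for r in existing_requests)
    return used + parse_cpu_millis(new_request) <= allocatable_m

# CPU requests logged for iruya-server-sfge57q7djm7:
server_requests = ["100m", "100m", "0m", "250m", "200m", "0m", "100m", "20m"]

# With a hypothetical 1000m allocatable, the filler pod is sized to the gap...
gap = 1000 - sum(parse_cpu_millis(r) for r in server_requests)
# ...so a pod requesting the gap fits, and anything beyond it triggers
# Insufficient cpu, which is exactly what the additional-pod event shows.
```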
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:22:40.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-ac4d6bfe-ca97-4892-9cd7-20eb9c3ef4c4
STEP: Creating a pod to test consume secrets
Feb 16 14:22:42.690: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8" in namespace "projected-4384" to be "success or failure"
Feb 16 14:22:42.721: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.400006ms
Feb 16 14:22:45.132: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441354865s
Feb 16 14:22:47.143: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452219014s
Feb 16 14:22:49.151: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459839204s
Feb 16 14:22:51.158: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467352328s
Feb 16 14:22:53.170: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.478980475s
STEP: Saw pod success
Feb 16 14:22:53.170: INFO: Pod "pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8" satisfied condition "success or failure"
Feb 16 14:22:53.178: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 14:22:53.297: INFO: Waiting for pod pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8 to disappear
Feb 16 14:22:53.335: INFO: Pod pod-projected-secrets-9ae18231-0162-4868-89fb-c7293df698c8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:22:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4384" for this suite.
Feb 16 14:22:59.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:22:59.482: INFO: namespace projected-4384 deletion completed in 6.13869567s

• [SLOW TEST:18.555 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
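The "mappings and Item Mode" test above projects a secret key into the volume under a remapped path with an explicit per-item file mode, then has the pod print the file's content and mode. A sketch of how such an `items` mapping re-keys secret data (the structure mirrors a projected volume's secret source; the 0644 default mode is an assumption, and the key/path values are illustrative):

```python
def project_secret(data, items, default_mode=0o644):
    """Map secret keys to target paths with optional per-item modes,
    like a projected volume's secret source with an 'items' list."""
    files = {}
    for item in items:
        files[item["path"]] = {
            "content": data[item["key"]],
            "mode": item.get("mode", default_mode),  # assumed default when unset
        }
    return files

files = project_secret(
    {"data-1": "value-1"},
    [{"key": "data-1", "path": "new-path-data-1", "mode": 0o400}],
)
```

The second, plain "mappings" test later in this log is the same projection without the per-item mode, so the remapped file falls back to the volume's default.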
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:22:59.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 16 14:22:59.584: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6867" to be "success or failure"
Feb 16 14:22:59.592: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16837ms
Feb 16 14:23:01.607: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022942302s
Feb 16 14:23:03.630: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045745961s
Feb 16 14:23:05.640: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055572311s
Feb 16 14:23:07.646: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061742272s
Feb 16 14:23:09.658: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.074044791s
Feb 16 14:23:11.675: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.090259097s
STEP: Saw pod success
Feb 16 14:23:11.675: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 16 14:23:11.682: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 16 14:23:11.804: INFO: Waiting for pod pod-host-path-test to disappear
Feb 16 14:23:11.831: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:23:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6867" for this suite.
Feb 16 14:23:17.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:23:18.052: INFO: namespace hostpath-6867 deletion completed in 6.211214766s

• [SLOW TEST:18.569 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
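The HostPath mode test above (and the emptyDir 0777 test that follows) verifies volume permissions by having the test container stat the mounted path and print its formatted permission bits, which the framework compares against the expected mode. A local sketch of that check using `os.stat` (the temporary directory stands in for the test's mounted `/test-volume`):

```python
import os
import stat
import tempfile

def file_mode_string(path):
    """Render permission bits the way `ls -l`-style test output does,
    e.g. 'drwxrwxrwx' for a world-writable directory."""
    return stat.filemode(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "test-volume")
    os.mkdir(target)
    os.chmod(target, 0o777)  # the mode the emptyDir (non-root,0777,tmpfs) test expects
    mode = file_mode_string(target)
```

In the real tests the mode also reflects the volume medium (tmpfs-backed mounts can carry extra bits such as sticky), which is why the e2e output records the full mode string rather than just the octal value.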
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:23:18.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 16 14:23:18.156: INFO: Waiting up to 5m0s for pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2" in namespace "emptydir-9580" to be "success or failure"
Feb 16 14:23:18.162: INFO: Pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.905452ms
Feb 16 14:23:20.177: INFO: Pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020835554s
Feb 16 14:23:22.186: INFO: Pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029860687s
Feb 16 14:23:24.197: INFO: Pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041695204s
Feb 16 14:23:26.209: INFO: Pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052872397s
STEP: Saw pod success
Feb 16 14:23:26.209: INFO: Pod "pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2" satisfied condition "success or failure"
Feb 16 14:23:26.215: INFO: Trying to get logs from node iruya-node pod pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2 container test-container: 
STEP: delete the pod
Feb 16 14:23:26.294: INFO: Waiting for pod pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2 to disappear
Feb 16 14:23:26.334: INFO: Pod pod-bfa5525d-db87-4267-9f6d-f1428cbd7fc2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:23:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9580" for this suite.
Feb 16 14:23:34.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:23:34.572: INFO: namespace emptydir-9580 deletion completed in 8.169403643s

• [SLOW TEST:16.520 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:23:34.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-7df95eb0-4ec0-4b51-8159-aa53e08863d2
STEP: Creating a pod to test consume secrets
Feb 16 14:23:34.668: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750" in namespace "projected-7214" to be "success or failure"
Feb 16 14:23:34.681: INFO: Pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750": Phase="Pending", Reason="", readiness=false. Elapsed: 12.325254ms
Feb 16 14:23:36.688: INFO: Pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019914448s
Feb 16 14:23:38.697: INFO: Pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028539573s
Feb 16 14:23:40.708: INFO: Pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03937346s
Feb 16 14:23:42.735: INFO: Pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066220185s
STEP: Saw pod success
Feb 16 14:23:42.735: INFO: Pod "pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750" satisfied condition "success or failure"
Feb 16 14:23:42.739: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 14:23:42.967: INFO: Waiting for pod pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750 to disappear
Feb 16 14:23:42.977: INFO: Pod pod-projected-secrets-3e188fbf-daff-48dc-900f-c48cf8414750 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:23:42.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7214" for this suite.
Feb 16 14:23:49.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:23:49.143: INFO: namespace projected-7214 deletion completed in 6.159623469s

• [SLOW TEST:14.571 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
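Each "success or failure" wait in this log follows the same pattern: poll the pod's phase on a fixed interval until it reaches a terminal phase or the 5m0s timeout expires. A cluster-free sketch of that loop (the `get_phase` callable, the 2 s interval, and the helper name are illustrative assumptions mirroring the cadence seen above, not the e2e framework's actual implementation):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports a terminal phase
    ("Succeeded" or "Failed"), or raise after `timeout` seconds.
    Returns (phase, elapsed_seconds). clock/sleep are injectable for testing."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)

# Simulate the sequence logged above: four "Pending" polls, then "Succeeded".
phases = iter(["Pending", "Pending", "Pending", "Pending", "Succeeded"])
phase, _ = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(phase)  # Succeeded
```

The increasing `Elapsed:` values in the log (12 ms, 2.0 s, 4.0 s, ...) are exactly this loop's per-poll elapsed time; the real framework logs one line per poll.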
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:23:49.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 16 14:23:49.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583698,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 14:23:49.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583698,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 16 14:23:59.213: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583712,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 16 14:23:59.213: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583712,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 16 14:24:09.236: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583726,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 14:24:09.237: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583726,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 16 14:24:19.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583741,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 14:24:19.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-a,UID:df12c190-3dd7-4eca-8d99-4b0ac5fc1b06,ResourceVersion:24583741,Generation:0,CreationTimestamp:2020-02-16 14:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 16 14:24:29.282: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-b,UID:ec9ef4b4-9d09-4547-8ed2-e5da3ef29e14,ResourceVersion:24583755,Generation:0,CreationTimestamp:2020-02-16 14:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 14:24:29.282: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-b,UID:ec9ef4b4-9d09-4547-8ed2-e5da3ef29e14,ResourceVersion:24583755,Generation:0,CreationTimestamp:2020-02-16 14:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 16 14:24:39.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-b,UID:ec9ef4b4-9d09-4547-8ed2-e5da3ef29e14,ResourceVersion:24583769,Generation:0,CreationTimestamp:2020-02-16 14:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 14:24:39.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6525,SelfLink:/api/v1/namespaces/watch-6525/configmaps/e2e-watch-test-configmap-b,UID:ec9ef4b4-9d09-4547-8ed2-e5da3ef29e14,ResourceVersion:24583769,Generation:0,CreationTimestamp:2020-02-16 14:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:24:49.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6525" for this suite.
Feb 16 14:24:55.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:24:55.490: INFO: namespace watch-6525 deletion completed in 6.185520054s

• [SLOW TEST:66.346 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
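The Watchers spec above registers three watchers — label A only, label B only, and A-or-B — and verifies that each event (ADDED, MODIFIED, MODIFIED, DELETED for configmap A; ADDED, DELETED for configmap B) is delivered only to watchers whose label selector matches. A cluster-free sketch of that fan-out logic (the class and method names are illustrative, not the apimachinery implementation):

```python
class LabelWatcher:
    """Collects events for objects whose label value matches an accepted set."""
    def __init__(self, name, accepted_values, key="watch-this-configmap"):
        self.name = name
        self.key = key
        self.accepted = set(accepted_values)
        self.events = []

    def deliver(self, event_type, labels):
        # Only record the event if this watcher's selector matches.
        if labels.get(self.key) in self.accepted:
            self.events.append(event_type)

# Three watchers, mirroring the test: A, B, and A-or-B.
watch_a = LabelWatcher("A", {"multiple-watchers-A"})
watch_b = LabelWatcher("B", {"multiple-watchers-B"})
watch_ab = LabelWatcher("AB", {"multiple-watchers-A", "multiple-watchers-B"})
watchers = [watch_a, watch_b, watch_ab]

# Replay the event sequence recorded in the log above.
label_a = {"watch-this-configmap": "multiple-watchers-A"}
label_b = {"watch-this-configmap": "multiple-watchers-B"}
for event_type, labels in [("ADDED", label_a), ("MODIFIED", label_a),
                           ("MODIFIED", label_a), ("DELETED", label_a),
                           ("ADDED", label_b), ("DELETED", label_b)]:
    for w in watchers:
        w.deliver(event_type, labels)

print(watch_a.events)   # ['ADDED', 'MODIFIED', 'MODIFIED', 'DELETED']
print(watch_b.events)   # ['ADDED', 'DELETED']
print(watch_ab.events)  # ['ADDED', 'MODIFIED', 'MODIFIED', 'DELETED', 'ADDED', 'DELETED']
```

This is why every "Got : ..." line above appears twice: each event matches two of the three watchers (the single-label watcher and the A-or-B watcher).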
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:24:55.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:25:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1827" for this suite.
Feb 16 14:26:07.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:26:07.835: INFO: namespace kubelet-test-1827 deletion completed in 1m4.172116408s

• [SLOW TEST:72.344 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:26:07.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 16 14:26:07.966: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 16 14:26:08.751: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 16 14:26:11.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:13.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:15.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:17.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:19.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:21.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:23.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:25.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717459968, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 14:26:29.372: INFO: Waited 1.715609781s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:26:31.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6727" for this suite.
Feb 16 14:26:39.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:26:39.320: INFO: namespace aggregator-6727 deletion completed in 8.232253173s

• [SLOW TEST:31.484 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
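The Aggregator spec polls the sample-apiserver Deployment until its status reflects minimum availability; in the repeated status dumps above, the gate is `ReadyReplicas`/`AvailableReplicas` reaching the desired count and the `Available` condition flipping to `True`. A sketch of that readiness predicate over a plain status dict (field names follow the DeploymentStatus dumps in this log; the helper itself is an illustrative assumption, not the test's code):

```python
def deployment_available(status, desired_replicas=1):
    """Return True once the Deployment status reflects minimum availability:
    enough ready/available replicas plus an Available=True condition."""
    if status.get("ReadyReplicas", 0) < desired_replicas:
        return False
    if status.get("AvailableReplicas", 0) < desired_replicas:
        return False
    conditions = {c["Type"]: c["Status"] for c in status.get("Conditions", [])}
    return conditions.get("Available") == "True"

# The status repeatedly logged above: 1 updated replica, none ready yet.
pending = {
    "Replicas": 1, "UpdatedReplicas": 1, "ReadyReplicas": 0,
    "AvailableReplicas": 0, "UnavailableReplicas": 1,
    "Conditions": [
        {"Type": "Available", "Status": "False",
         "Reason": "MinimumReplicasUnavailable"},
        {"Type": "Progressing", "Status": "True",
         "Reason": "ReplicaSetUpdated"},
    ],
}
# What the status looks like once the replica becomes ready.
ready = {
    "Replicas": 1, "UpdatedReplicas": 1, "ReadyReplicas": 1,
    "AvailableReplicas": 1, "UnavailableReplicas": 0,
    "Conditions": [{"Type": "Available", "Status": "True",
                    "Reason": "MinimumReplicasAvailable"}],
}
print(deployment_available(pending))  # False
print(deployment_available(ready))    # True
```

The test logs one `deployment status:` line per poll while this predicate is false, then proceeds once the sample-apiserver is ready to handle requests.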
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:26:39.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-bc7682ec-9ab1-4389-a2a7-42c86613ac10
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bc7682ec-9ab1-4389-a2a7-42c86613ac10
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:28:21.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1434" for this suite.
Feb 16 14:28:43.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:28:43.928: INFO: namespace configmap-1434 deletion completed in 22.150715927s

• [SLOW TEST:124.608 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:28:43.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-359977fa-d617-4b48-a4f3-7950ee722b98
STEP: Creating a pod to test consume configMaps
Feb 16 14:28:44.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb" in namespace "projected-8285" to be "success or failure"
Feb 16 14:28:44.618: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 446.264945ms
Feb 16 14:28:46.624: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452435077s
Feb 16 14:28:48.637: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465227456s
Feb 16 14:28:50.659: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.487478401s
Feb 16 14:28:52.917: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.745309798s
Feb 16 14:28:54.926: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.754537225s
Feb 16 14:28:56.940: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76856329s
Feb 16 14:28:58.962: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.790641044s
Feb 16 14:29:00.977: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.805896836s
STEP: Saw pod success
Feb 16 14:29:00.977: INFO: Pod "pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb" satisfied condition "success or failure"
Feb 16 14:29:00.983: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 14:29:01.178: INFO: Waiting for pod pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb to disappear
Feb 16 14:29:01.188: INFO: Pod pod-projected-configmaps-8147d411-a97f-4563-8f9b-c6873ea696eb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:29:01.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8285" for this suite.
Feb 16 14:29:07.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:29:07.376: INFO: namespace projected-8285 deletion completed in 6.181777251s

• [SLOW TEST:23.446 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:29:07.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 14:29:07.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66" in namespace "projected-9666" to be "success or failure"
Feb 16 14:29:07.596: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 24.802513ms
Feb 16 14:29:09.730: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158787667s
Feb 16 14:29:11.750: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178208285s
Feb 16 14:29:13.760: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188334467s
Feb 16 14:29:15.770: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198915531s
Feb 16 14:29:17.860: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.288276567s
Feb 16 14:29:19.872: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.300741559s
Feb 16 14:29:22.050: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.478237384s
STEP: Saw pod success
Feb 16 14:29:22.050: INFO: Pod "downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66" satisfied condition "success or failure"
Feb 16 14:29:22.055: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66 container client-container: 
STEP: delete the pod
Feb 16 14:29:22.326: INFO: Waiting for pod downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66 to disappear
Feb 16 14:29:22.343: INFO: Pod downwardapi-volume-3e750ba9-d696-42c8-950a-732ee5c63b66 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:29:22.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9666" for this suite.
Feb 16 14:29:30.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:29:30.686: INFO: namespace projected-9666 deletion completed in 8.282368327s

• [SLOW TEST:23.309 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
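Editor's note: the "set mode on item file" behavior exercised above comes from the projected downwardAPI volume's per-item `mode` field. A minimal sketch of that kind of pod follows; the pod name, image, and the 0400 mode are illustrative, not the actual e2e fixture's values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo      # illustrative name, not the e2e fixture
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
            mode: 0400             # per-item file mode the test asserts on
```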
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:29:30.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:31:03.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4857" for this suite.
Feb 16 14:31:11.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:31:12.107: INFO: namespace container-runtime-4857 deletion completed in 8.271359541s

• [SLOW TEST:101.420 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
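Editor's note: the three container names above appear to encode the restart policy under test ("rpa" = Always, "rpof" = OnFailure, "rpn" = Never) — an inference from the naming, not stated in the log. A sketch of the Always case, with illustrative names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpa-demo     # illustrative; not the e2e fixture
spec:
  restartPolicy: Always            # the 'rpa' case; 'rpof'/'rpn' would use OnFailure/Never
  containers:
  - name: terminate-cmd-rpa
    image: busybox
    command: ["sh", "-c", "exit 1"]  # container exits so the test can observe
                                     # RestartCount, Phase, Ready, and State
```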
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:31:12.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 14:31:12.412: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.58625ms)
Feb 16 14:31:12.440: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 28.410783ms)
Feb 16 14:31:12.445: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.914879ms)
Feb 16 14:31:12.450: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.159905ms)
Feb 16 14:31:12.456: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.991629ms)
Feb 16 14:31:12.461: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.369724ms)
Feb 16 14:31:12.466: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.755178ms)
Feb 16 14:31:12.470: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.281191ms)
Feb 16 14:31:12.498: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.996034ms)
Feb 16 14:31:13.199: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 700.28453ms)
Feb 16 14:31:13.250: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.780137ms)
Feb 16 14:31:13.265: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.440188ms)
Feb 16 14:31:13.272: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.686663ms)
Feb 16 14:31:13.280: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.214264ms)
Feb 16 14:31:13.313: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.755173ms)
Feb 16 14:31:13.322: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.662286ms)
Feb 16 14:31:13.328: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.516488ms)
Feb 16 14:31:13.335: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.460845ms)
Feb 16 14:31:13.340: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.251674ms)
Feb 16 14:31:13.350: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.830227ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:31:13.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-278" for this suite.
Feb 16 14:31:19.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:31:19.537: INFO: namespace proxy-278 deletion completed in 6.181096921s

• [SLOW TEST:7.429 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:31:19.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 16 14:31:19.751: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 16 14:31:19.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-939'
Feb 16 14:31:22.584: INFO: stderr: ""
Feb 16 14:31:22.584: INFO: stdout: "service/redis-slave created\n"
Feb 16 14:31:22.585: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 16 14:31:22.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-939'
Feb 16 14:31:24.095: INFO: stderr: ""
Feb 16 14:31:24.095: INFO: stdout: "service/redis-master created\n"
Feb 16 14:31:24.096: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 16 14:31:24.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-939'
Feb 16 14:31:24.799: INFO: stderr: ""
Feb 16 14:31:24.799: INFO: stdout: "service/frontend created\n"
Feb 16 14:31:24.799: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 16 14:31:24.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-939'
Feb 16 14:31:25.428: INFO: stderr: ""
Feb 16 14:31:25.428: INFO: stdout: "deployment.apps/frontend created\n"
Feb 16 14:31:25.428: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 16 14:31:25.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-939'
Feb 16 14:31:25.860: INFO: stderr: ""
Feb 16 14:31:25.860: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 16 14:31:25.861: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 16 14:31:25.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-939'
Feb 16 14:31:29.609: INFO: stderr: ""
Feb 16 14:31:29.609: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 16 14:31:29.609: INFO: Waiting for all frontend pods to be Running.
Feb 16 14:32:14.662: INFO: Waiting for frontend to serve content.
Feb 16 14:32:14.838: INFO: Trying to add a new entry to the guestbook.
Feb 16 14:32:14.888: INFO: Verifying that added entry can be retrieved.
Feb 16 14:32:17.430: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb 16 14:32:22.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-939'
Feb 16 14:32:22.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:32:22.773: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 14:32:22.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-939'
Feb 16 14:32:23.214: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:32:23.214: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 14:32:23.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-939'
Feb 16 14:32:23.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:32:23.427: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 14:32:23.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-939'
Feb 16 14:32:23.568: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:32:23.568: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 14:32:23.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-939'
Feb 16 14:32:25.121: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:32:25.121: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 14:32:25.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-939'
Feb 16 14:32:28.132: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:32:28.132: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:32:28.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-939" for this suite.
Feb 16 14:33:19.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:33:19.756: INFO: namespace kubectl-939 deletion completed in 50.833590231s

• [SLOW TEST:120.217 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:33:19.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 14:33:19.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba" in namespace "projected-6419" to be "success or failure"
Feb 16 14:33:19.991: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 11.183053ms
Feb 16 14:33:22.000: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020760869s
Feb 16 14:33:24.008: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028857517s
Feb 16 14:33:26.025: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045189047s
Feb 16 14:33:28.041: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061892987s
Feb 16 14:33:30.049: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069913734s
Feb 16 14:33:32.057: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077138435s
Feb 16 14:33:34.072: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092393841s
Feb 16 14:33:36.080: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.100104973s
STEP: Saw pod success
Feb 16 14:33:36.080: INFO: Pod "downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba" satisfied condition "success or failure"
Feb 16 14:33:36.083: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba container client-container: 
STEP: delete the pod
Feb 16 14:33:36.382: INFO: Waiting for pod downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba to disappear
Feb 16 14:33:36.418: INFO: Pod downwardapi-volume-317aa580-d15b-4bef-bf96-3e6fda0978ba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:33:36.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6419" for this suite.
Feb 16 14:33:42.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:33:42.583: INFO: namespace projected-6419 deletion completed in 6.135506906s

• [SLOW TEST:22.827 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
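Editor's note: the "default cpu limit" spec above relies on the downward API resolving `limits.cpu` to the node's allocatable CPU when the container sets no limit. A projected-volume sketch of that shape (names and image illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the downward API falls back to
    # node allocatable CPU -- the value the test asserts on.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```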
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:33:42.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-spm8
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 14:33:43.248: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-spm8" in namespace "subpath-9925" to be "success or failure"
Feb 16 14:33:43.261: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.388845ms
Feb 16 14:33:45.274: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025717212s
Feb 16 14:33:47.286: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03776643s
Feb 16 14:33:49.306: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057325789s
Feb 16 14:33:51.372: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123805872s
Feb 16 14:33:53.381: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132018879s
Feb 16 14:33:55.391: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.142089508s
Feb 16 14:33:57.400: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.15152678s
Feb 16 14:33:59.408: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 16.159670109s
Feb 16 14:34:01.417: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 18.168403779s
Feb 16 14:34:03.427: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 20.178541923s
Feb 16 14:34:05.438: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 22.189220345s
Feb 16 14:34:07.448: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 24.199029696s
Feb 16 14:34:09.456: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 26.207856598s
Feb 16 14:34:11.465: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 28.216599436s
Feb 16 14:34:13.474: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 30.225233049s
Feb 16 14:34:15.487: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 32.238380323s
Feb 16 14:34:17.497: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Running", Reason="", readiness=true. Elapsed: 34.248712545s
Feb 16 14:34:19.505: INFO: Pod "pod-subpath-test-configmap-spm8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.256229743s
STEP: Saw pod success
Feb 16 14:34:19.505: INFO: Pod "pod-subpath-test-configmap-spm8" satisfied condition "success or failure"
Feb 16 14:34:19.510: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-spm8 container test-container-subpath-configmap-spm8: 
STEP: delete the pod
Feb 16 14:34:19.725: INFO: Waiting for pod pod-subpath-test-configmap-spm8 to disappear
Feb 16 14:34:19.735: INFO: Pod pod-subpath-test-configmap-spm8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-spm8
Feb 16 14:34:19.735: INFO: Deleting pod "pod-subpath-test-configmap-spm8" in namespace "subpath-9925"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:34:19.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9925" for this suite.
Feb 16 14:34:25.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:34:25.925: INFO: namespace subpath-9925 deletion completed in 6.178321758s

• [SLOW TEST:43.341 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
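Editor's note: the subpath case above mounts a single ConfigMap key over a path that already exists in the container image. A sketch of that pattern (ConfigMap name, key, and target file are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config        # illustrative
data:
  hosts: "127.0.0.1 demo.local"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/hosts"]
    volumeMounts:
    - name: config
      mountPath: /etc/hosts        # a file that already exists in the image
      subPath: hosts               # only this key replaces the existing file
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
```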
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:34:25.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 14:34:26.186: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:34:28.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7575" for this suite.
Feb 16 14:34:38.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:34:38.719: INFO: namespace custom-resource-definition-7575 deletion completed in 10.147960475s

• [SLOW TEST:12.794 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:34:38.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 16 14:34:55.561: INFO: Successfully updated pod "labelsupdate52557a56-16b0-4b2a-a24f-be355d50184b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:34:57.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1415" for this suite.
Feb 16 14:35:19.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:35:19.851: INFO: namespace downward-api-1415 deletion completed in 22.166148625s

• [SLOW TEST:41.131 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:35:19.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 16 14:35:20.044: INFO: Waiting up to 5m0s for pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7" in namespace "downward-api-8539" to be "success or failure"
Feb 16 14:35:20.186: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 141.789173ms
Feb 16 14:35:22.204: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160184697s
Feb 16 14:35:24.228: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184422293s
Feb 16 14:35:26.236: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191627384s
Feb 16 14:35:28.252: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2080997s
Feb 16 14:35:30.261: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.216544284s
Feb 16 14:35:32.267: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222567145s
Feb 16 14:35:34.287: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.243405407s
Feb 16 14:35:36.295: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.250593053s
Feb 16 14:35:39.186: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.141828077s
STEP: Saw pod success
Feb 16 14:35:39.186: INFO: Pod "downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7" satisfied condition "success or failure"
Feb 16 14:35:39.454: INFO: Trying to get logs from node iruya-node pod downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7 container dapi-container: 
STEP: delete the pod
Feb 16 14:35:40.640: INFO: Waiting for pod downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7 to disappear
Feb 16 14:35:40.645: INFO: Pod downward-api-88e880a6-eab4-4bb6-ae6b-e91630f439a7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:35:40.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8539" for this suite.
Feb 16 14:35:48.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:35:48.822: INFO: namespace downward-api-8539 deletion completed in 8.170195842s

• [SLOW TEST:28.972 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
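The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll loop that re-reads the pod phase every couple of seconds until it reaches a terminal state. A minimal sketch of that pattern (the `get_phase` callback is a stand-in, not the real e2e framework API):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    or raise once the timeout elapses."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in %.0fs" % timeout)

# Simulated phase reads standing in for repeated API calls:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_success_or_failure(lambda: next(phases), sleep=lambda s: None))  # -> Succeeded
```

Each `Phase="Pending" ... Elapsed: ...` line in the log is one iteration of a loop like this.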
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:35:48.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-f9840351-ac5c-4d87-aad0-2a8382804173 in namespace container-probe-9694
Feb 16 14:36:05.281: INFO: Started pod busybox-f9840351-ac5c-4d87-aad0-2a8382804173 in namespace container-probe-9694
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 14:36:05.286: INFO: Initial restart count of pod busybox-f9840351-ac5c-4d87-aad0-2a8382804173 is 0
Feb 16 14:36:57.347: INFO: Restart count of pod container-probe-9694/busybox-f9840351-ac5c-4d87-aad0-2a8382804173 is now 1 (52.061555004s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:36:57.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9694" for this suite.
Feb 16 14:37:05.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:37:05.572: INFO: namespace container-probe-9694 deletion completed in 8.165955246s

• [SLOW TEST:76.748 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
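In the probe test above, the busybox pod's liveness probe runs `cat /tmp/health`; the container deletes the file after a delay, the probe starts failing, and the kubelet restarts the container (restart count 0 to 1 in about 52s). A toy model of the failure-threshold accounting (parameter name mirrors the probe's `failureThreshold` field; this is not kubelet code):

```python
def restarts_after(probe_results, failure_threshold=3):
    """Count container restarts for a sequence of probe outcomes (True = pass).
    A restart triggers after failure_threshold consecutive failures and
    resets the failure counter, as a fresh container starts probing anew."""
    restarts = failures = 0
    for ok in probe_results:
        failures = 0 if ok else failures + 1
        if failures == failure_threshold:
            restarts += 1
            failures = 0
    return restarts

# Probe passes while /tmp/health exists, then fails once it is removed:
print(restarts_after([True, True, False, False, False]))  # -> 1
```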
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:37:05.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:37:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8281" for this suite.
Feb 16 14:38:11.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:38:11.327: INFO: namespace kubelet-test-8281 deletion completed in 48.150477195s

• [SLOW TEST:65.755 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
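The hostAliases test above verifies that entries from `pod.spec.hostAliases` show up in the container's `/etc/hosts`. A simplified sketch of that rendering (the exact file the kubelet manages also carries header comments; this helper and its sample data are illustrative only):

```python
def hosts_entries(host_aliases):
    """Render pod.spec.hostAliases entries as /etc/hosts lines:
    'IP<TAB>hostname1 hostname2 ...'."""
    return "\n".join("%s\t%s" % (a["ip"], " ".join(a["hostnames"]))
                     for a in host_aliases)

print(hosts_entries([{"ip": "123.45.67.89",
                      "hostnames": ["foo.local", "bar.local"]}]))
```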
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:38:11.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-94438605-4e4a-495c-9778-7c6d82d4bade
STEP: Creating configMap with name cm-test-opt-upd-4269e860-b5c7-42f1-9b3b-520f7cddbc27
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-94438605-4e4a-495c-9778-7c6d82d4bade
STEP: Updating configmap cm-test-opt-upd-4269e860-b5c7-42f1-9b3b-520f7cddbc27
STEP: Creating configMap with name cm-test-opt-create-82b14e83-7ca4-433d-9b82-de6f149d884e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:38:39.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7405" for this suite.
Feb 16 14:39:03.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:39:03.769: INFO: namespace projected-7405 deletion completed in 23.773813261s

• [SLOW TEST:52.442 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
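The projected configMap test above creates an `opt-del` and an `opt-upd` configMap, mounts them as optional sources, then deletes one, updates the other, and creates a third, watching the volume converge. A loose model of how optional sources behave during projection (hypothetical data shapes, not the kubelet's volume plugin):

```python
def project_sources(sources, configmaps):
    """Merge projected configMap sources into one file map. Sources marked
    optional are skipped when the configMap is missing; missing required
    sources are an error."""
    files = {}
    for src in sources:
        data = configmaps.get(src["name"])
        if data is None:
            if src.get("optional"):
                continue
            raise KeyError("configmap %r not found" % src["name"])
        files.update(data)
    return files

# 'cm-del' was deleted, so only the surviving source contributes files:
cms = {"cm-upd": {"data-1": "value-1"}}
print(project_sources([{"name": "cm-del", "optional": True},
                       {"name": "cm-upd", "optional": True}], cms))
```

This is why deleting an optional source empties its files from the volume instead of failing the pod.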
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:39:03.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7156a30f-a394-4bb7-9e76-1d324c89668f
STEP: Creating a pod to test consume configMaps
Feb 16 14:39:04.011: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c" in namespace "projected-8647" to be "success or failure"
Feb 16 14:39:04.029: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.582493ms
Feb 16 14:39:06.037: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025881682s
Feb 16 14:39:08.045: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034002777s
Feb 16 14:39:10.055: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043537467s
Feb 16 14:39:12.073: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061936344s
Feb 16 14:39:14.147: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135136638s
Feb 16 14:39:16.154: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.142630264s
Feb 16 14:39:18.163: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.151607767s
STEP: Saw pod success
Feb 16 14:39:18.163: INFO: Pod "pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c" satisfied condition "success or failure"
Feb 16 14:39:18.166: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 14:39:18.393: INFO: Waiting for pod pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c to disappear
Feb 16 14:39:18.436: INFO: Pod pod-projected-configmaps-7fe79c9d-ee1a-4e03-888d-c73eb4277a9c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:39:18.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8647" for this suite.
Feb 16 14:39:24.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:39:24.818: INFO: namespace projected-8647 deletion completed in 6.363588745s

• [SLOW TEST:21.047 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:39:24.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 16 14:39:25.473: INFO: Number of nodes with available pods: 0
Feb 16 14:39:25.473: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:26.488: INFO: Number of nodes with available pods: 0
Feb 16 14:39:26.488: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:28.555: INFO: Number of nodes with available pods: 0
Feb 16 14:39:28.555: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:29.502: INFO: Number of nodes with available pods: 0
Feb 16 14:39:29.502: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:30.499: INFO: Number of nodes with available pods: 0
Feb 16 14:39:30.499: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:31.485: INFO: Number of nodes with available pods: 0
Feb 16 14:39:31.485: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:32.483: INFO: Number of nodes with available pods: 0
Feb 16 14:39:32.483: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:33.490: INFO: Number of nodes with available pods: 0
Feb 16 14:39:33.490: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:35.798: INFO: Number of nodes with available pods: 0
Feb 16 14:39:35.798: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:37.210: INFO: Number of nodes with available pods: 0
Feb 16 14:39:37.210: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:37.519: INFO: Number of nodes with available pods: 0
Feb 16 14:39:37.519: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:38.798: INFO: Number of nodes with available pods: 0
Feb 16 14:39:38.798: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:39.486: INFO: Number of nodes with available pods: 0
Feb 16 14:39:39.486: INFO: Node iruya-node is running more than one daemon pod
Feb 16 14:39:40.488: INFO: Number of nodes with available pods: 1
Feb 16 14:39:40.488: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:41.486: INFO: Number of nodes with available pods: 2
Feb 16 14:39:41.486: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 16 14:39:42.047: INFO: Number of nodes with available pods: 1
Feb 16 14:39:42.047: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:43.065: INFO: Number of nodes with available pods: 1
Feb 16 14:39:43.065: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:44.227: INFO: Number of nodes with available pods: 1
Feb 16 14:39:44.227: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:45.061: INFO: Number of nodes with available pods: 1
Feb 16 14:39:45.061: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:46.069: INFO: Number of nodes with available pods: 1
Feb 16 14:39:46.069: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:47.567: INFO: Number of nodes with available pods: 1
Feb 16 14:39:47.567: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:48.059: INFO: Number of nodes with available pods: 1
Feb 16 14:39:48.059: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:49.070: INFO: Number of nodes with available pods: 1
Feb 16 14:39:49.070: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:50.063: INFO: Number of nodes with available pods: 1
Feb 16 14:39:50.063: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:51.065: INFO: Number of nodes with available pods: 1
Feb 16 14:39:51.065: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:52.373: INFO: Number of nodes with available pods: 1
Feb 16 14:39:52.374: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:53.086: INFO: Number of nodes with available pods: 1
Feb 16 14:39:53.086: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:54.345: INFO: Number of nodes with available pods: 1
Feb 16 14:39:54.345: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:55.065: INFO: Number of nodes with available pods: 1
Feb 16 14:39:55.065: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:56.060: INFO: Number of nodes with available pods: 1
Feb 16 14:39:56.060: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:58.388: INFO: Number of nodes with available pods: 1
Feb 16 14:39:58.388: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:39:59.066: INFO: Number of nodes with available pods: 1
Feb 16 14:39:59.066: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:40:00.071: INFO: Number of nodes with available pods: 1
Feb 16 14:40:00.072: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:40:01.070: INFO: Number of nodes with available pods: 1
Feb 16 14:40:01.070: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:40:02.060: INFO: Number of nodes with available pods: 1
Feb 16 14:40:02.060: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:40:03.060: INFO: Number of nodes with available pods: 1
Feb 16 14:40:03.060: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 14:40:04.062: INFO: Number of nodes with available pods: 2
Feb 16 14:40:04.062: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-282, will wait for the garbage collector to delete the pods
Feb 16 14:40:04.134: INFO: Deleting DaemonSet.extensions daemon-set took: 13.093486ms
Feb 16 14:40:04.434: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.377809ms
Feb 16 14:40:16.137: INFO: Number of nodes with available pods: 0
Feb 16 14:40:16.137: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 14:40:16.139: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-282/daemonsets","resourceVersion":"24585714"},"items":null}

Feb 16 14:40:16.141: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-282/pods","resourceVersion":"24585714"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:40:16.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-282" for this suite.
Feb 16 14:40:24.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:40:24.328: INFO: namespace daemonsets-282 deletion completed in 8.173529253s

• [SLOW TEST:59.510 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
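The long run of `Number of nodes with available pods` lines above is the DaemonSet check polling until every node runs an available daemon pod (2 nodes, 2 pods), then again after one pod is killed and revived. The tally it reports can be sketched like this (the pair-list input is a stand-in for the framework's pod listing):

```python
def nodes_with_available_pods(pods):
    """Given (node_name, available) pairs for daemon pods, return the set
    of nodes that have at least one available pod."""
    return {node for node, available in pods if available}

# Mid-rollout: the pod on iruya-node is not yet available.
pods = [("iruya-node", False), ("iruya-server-sfge57q7djm7", True)]
print(len(nodes_with_available_pods(pods)))  # -> 1
```

The test passes once this count equals the number of schedulable nodes, which is the `Number of running nodes: 2, number of available pods: 2` line in the log.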
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:40:24.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 16 14:40:24.743: INFO: Waiting up to 5m0s for pod "pod-7642ef41-7be1-402a-9792-0aed299721d8" in namespace "emptydir-1151" to be "success or failure"
Feb 16 14:40:24.805: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 62.250983ms
Feb 16 14:40:26.813: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07025345s
Feb 16 14:40:28.823: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080055165s
Feb 16 14:40:30.835: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092224951s
Feb 16 14:40:32.851: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107765392s
Feb 16 14:40:34.865: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.122286274s
Feb 16 14:40:36.877: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.133663212s
Feb 16 14:40:38.890: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.147327817s
Feb 16 14:40:40.899: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.156265529s
STEP: Saw pod success
Feb 16 14:40:40.899: INFO: Pod "pod-7642ef41-7be1-402a-9792-0aed299721d8" satisfied condition "success or failure"
Feb 16 14:40:40.904: INFO: Trying to get logs from node iruya-node pod pod-7642ef41-7be1-402a-9792-0aed299721d8 container test-container: 
STEP: delete the pod
Feb 16 14:40:41.754: INFO: Waiting for pod pod-7642ef41-7be1-402a-9792-0aed299721d8 to disappear
Feb 16 14:40:41.778: INFO: Pod pod-7642ef41-7be1-402a-9792-0aed299721d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:40:41.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1151" for this suite.
Feb 16 14:40:49.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:40:49.890: INFO: namespace emptydir-1151 deletion completed in 8.103882609s

• [SLOW TEST:25.561 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:40:49.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 14:40:50.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6508'
Feb 16 14:40:50.298: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 14:40:50.298: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 16 14:40:50.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6508'
Feb 16 14:40:50.703: INFO: stderr: ""
Feb 16 14:40:50.703: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:40:50.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6508" for this suite.
Feb 16 14:41:15.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:41:15.317: INFO: namespace kubectl-6508 deletion completed in 24.602044371s

• [SLOW TEST:25.427 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:41:15.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1388, will wait for the garbage collector to delete the pods
Feb 16 14:41:35.989: INFO: Deleting Job.batch foo took: 12.203352ms
Feb 16 14:41:36.490: INFO: Terminating Job.batch foo pods took: 500.619895ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:42:26.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1388" for this suite.
Feb 16 14:42:34.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:42:34.946: INFO: namespace job-1388 deletion completed in 8.238205421s

• [SLOW TEST:79.628 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:42:34.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5800.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5800.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5800.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5800.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5800.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5800.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
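The doubled `$$` in the logged probe commands is the framework's escaping for a single shell `$`. Once rendered, the pod A-record name is derived by dashing the octets of the pod IP and appending the pod DNS suffix for the test namespace. A minimal plain-shell sketch of that derivation; the IP `10.44.0.1` is a hypothetical stand-in for the probe's `hostname -i`:

```shell
# Hypothetical pod IP; in the real probe it comes from `hostname -i`.
pod_ip="10.44.0.1"

# Replace the dots with dashes and append the pod DNS suffix for namespace dns-5800.
pod_a_rec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5800.pod.cluster.local"}')
echo "$pod_a_rec"   # 10-44-0-1.dns-5800.pod.cluster.local
```

The probe then resolves that name over UDP and TCP (`dig +notcp` / `dig +tcp`) and writes `OK` into the results directory on success, which is what the prober pod reads back below.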

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 14:42:57.443: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d: the server could not find the requested resource (get pods dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d)
Feb 16 14:42:57.448: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d: the server could not find the requested resource (get pods dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d)
Feb 16 14:42:57.453: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5800.svc.cluster.local from pod dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d: the server could not find the requested resource (get pods dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d)
Feb 16 14:42:57.458: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d: the server could not find the requested resource (get pods dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d)
Feb 16 14:42:57.464: INFO: Unable to read jessie_udp@PodARecord from pod dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d: the server could not find the requested resource (get pods dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d)
Feb 16 14:42:57.472: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d: the server could not find the requested resource (get pods dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d)
Feb 16 14:42:57.472: INFO: Lookups using dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-5800.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 16 14:43:03.600: INFO: DNS probes using dns-5800/dns-test-2e81eefa-2310-4406-9c71-c07d35ae364d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:43:03.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5800" for this suite.
Feb 16 14:43:12.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:43:12.162: INFO: namespace dns-5800 deletion completed in 8.194862576s

• [SLOW TEST:37.212 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:43:12.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2425626f-f40f-4d23-8bad-9a8e427d11d8
STEP: Creating a pod to test consume secrets
Feb 16 14:43:15.181: INFO: Waiting up to 5m0s for pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77" in namespace "secrets-5217" to be "success or failure"
Feb 16 14:43:15.196: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 14.32569ms
Feb 16 14:43:17.202: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0208272s
Feb 16 14:43:19.213: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031173772s
Feb 16 14:43:21.224: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042284534s
Feb 16 14:43:23.294: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112610494s
Feb 16 14:43:25.302: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120612227s
Feb 16 14:43:27.313: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 12.13205884s
Feb 16 14:43:30.254: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Pending", Reason="", readiness=false. Elapsed: 15.072578576s
Feb 16 14:43:32.266: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.084793508s
STEP: Saw pod success
Feb 16 14:43:32.266: INFO: Pod "pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77" satisfied condition "success or failure"
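The wait loop above polls the pod phase every ~2s until it reaches a terminal state or the 5m0s budget runs out. A hedged sketch of the same idea in plain shell; `get_phase` is a hypothetical stub standing in for a live `kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}'` call so the sketch runs without a cluster:

```shell
# Stub for the live phase lookup; against a real cluster this would be:
#   kubectl get pod "$pod" -n secrets-5217 -o jsonpath='{.status.phase}'
get_phase() { echo "Succeeded"; }

pod="pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77"
phase=""
for i in $(seq 1 150); do            # ~5m0s budget at 2s per poll, as in the log
  phase=$(get_phase "$pod")
  case "$phase" in
    Succeeded|Failed) break ;;       # "success or failure": stop on a terminal phase
  esac
  sleep 2
done
echo "$phase"
```

In the run above the pod sat in `Pending` for roughly 17s before the image pull and container exit flipped it to `Succeeded`.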
Feb 16 14:43:32.270: INFO: Trying to get logs from node iruya-node pod pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77 container secret-volume-test: 
STEP: delete the pod
Feb 16 14:43:32.432: INFO: Waiting for pod pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77 to disappear
Feb 16 14:43:32.449: INFO: Pod pod-secrets-44a77ff7-f0fd-4260-a8e8-2cc967b05b77 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:43:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5217" for this suite.
Feb 16 14:43:38.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:43:38.752: INFO: namespace secrets-5217 deletion completed in 6.296564345s
STEP: Destroying namespace "secret-namespace-6892" for this suite.
Feb 16 14:43:44.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:43:45.003: INFO: namespace secret-namespace-6892 deletion completed in 6.251401105s

• [SLOW TEST:32.840 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:43:45.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2786
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2786
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2786
Feb 16 14:43:45.403: INFO: Found 0 stateful pods, waiting for 1
Feb 16 14:43:55.412: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 14:44:05.414: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 16 14:44:05.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 14:44:08.634: INFO: stderr: "I0216 14:44:07.575934    2775 log.go:172] (0xc000116fd0) (0xc0006fa780) Create stream\nI0216 14:44:07.576012    2775 log.go:172] (0xc000116fd0) (0xc0006fa780) Stream added, broadcasting: 1\nI0216 14:44:07.585078    2775 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0216 14:44:07.585168    2775 log.go:172] (0xc000116fd0) (0xc00059c280) Create stream\nI0216 14:44:07.585197    2775 log.go:172] (0xc000116fd0) (0xc00059c280) Stream added, broadcasting: 3\nI0216 14:44:07.587615    2775 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0216 14:44:07.587689    2775 log.go:172] (0xc000116fd0) (0xc0006fa820) Create stream\nI0216 14:44:07.587718    2775 log.go:172] (0xc000116fd0) (0xc0006fa820) Stream added, broadcasting: 5\nI0216 14:44:07.594101    2775 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0216 14:44:07.741716    2775 log.go:172] (0xc000116fd0) Data frame received for 5\nI0216 14:44:07.741830    2775 log.go:172] (0xc0006fa820) (5) Data frame handling\nI0216 14:44:07.741858    2775 log.go:172] (0xc0006fa820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 14:44:08.358688    2775 log.go:172] (0xc000116fd0) Data frame received for 3\nI0216 14:44:08.358774    2775 log.go:172] (0xc00059c280) (3) Data frame handling\nI0216 14:44:08.358810    2775 log.go:172] (0xc00059c280) (3) Data frame sent\nI0216 14:44:08.617132    2775 log.go:172] (0xc000116fd0) Data frame received for 1\nI0216 14:44:08.617194    2775 log.go:172] (0xc0006fa780) (1) Data frame handling\nI0216 14:44:08.617218    2775 log.go:172] (0xc0006fa780) (1) Data frame sent\nI0216 14:44:08.617484    2775 log.go:172] (0xc000116fd0) (0xc0006fa820) Stream removed, broadcasting: 5\nI0216 14:44:08.617553    2775 log.go:172] (0xc000116fd0) (0xc0006fa780) Stream removed, broadcasting: 1\nI0216 14:44:08.617883    2775 log.go:172] (0xc000116fd0) (0xc00059c280) Stream removed, broadcasting: 3\nI0216 14:44:08.617972    2775 log.go:172] 
(0xc000116fd0) Go away received\nI0216 14:44:08.618081    2775 log.go:172] (0xc000116fd0) (0xc0006fa780) Stream removed, broadcasting: 1\nI0216 14:44:08.618144    2775 log.go:172] (0xc000116fd0) (0xc00059c280) Stream removed, broadcasting: 3\nI0216 14:44:08.618190    2775 log.go:172] (0xc000116fd0) (0xc0006fa820) Stream removed, broadcasting: 5\n"
Feb 16 14:44:08.634: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 14:44:08.634: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 14:44:09.327: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 16 14:44:19.344: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 14:44:19.344: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 14:44:19.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999551s
Feb 16 14:44:20.412: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.959267945s
Feb 16 14:44:21.423: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.947472072s
Feb 16 14:44:22.437: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.936980504s
Feb 16 14:44:23.455: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.922623326s
Feb 16 14:44:24.471: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.905249183s
Feb 16 14:44:25.478: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.88941067s
Feb 16 14:44:26.496: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.881803047s
Feb 16 14:44:27.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.864327712s
Feb 16 14:44:28.898: INFO: Verifying statefulset ss doesn't scale past 1 for another 855.785382ms
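The halt verified above is driven by readiness: moving `index.html` out of the nginx web root makes the readiness probe fail, the pod's `Ready` condition flips to `False`, and the StatefulSet controller then refuses to create the next ordinal. A hedged sketch of the check; `get_ready` is a hypothetical stub for the live jsonpath query shown in the comment:

```shell
# Stub standing in for:
#   kubectl get pod ss-0 -n statefulset-2786 \
#     -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
get_ready() { echo "False"; }        # post-mv state: readiness probe is failing

ready=$(get_ready ss-0)
if [ "$ready" = "False" ]; then
  echo "ss-0 unready: controller will not create the next ordinal"
fi
```

Restoring the file (the `mv` back in the step below) makes the probe pass again, after which the scale-up to 3 replicas proceeds in ordinal order.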
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2786
Feb 16 14:44:29.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 14:44:30.440: INFO: stderr: "I0216 14:44:30.147208    2807 log.go:172] (0xc000146dc0) (0xc0003166e0) Create stream\nI0216 14:44:30.147332    2807 log.go:172] (0xc000146dc0) (0xc0003166e0) Stream added, broadcasting: 1\nI0216 14:44:30.154671    2807 log.go:172] (0xc000146dc0) Reply frame received for 1\nI0216 14:44:30.154738    2807 log.go:172] (0xc000146dc0) (0xc000918000) Create stream\nI0216 14:44:30.154764    2807 log.go:172] (0xc000146dc0) (0xc000918000) Stream added, broadcasting: 3\nI0216 14:44:30.156593    2807 log.go:172] (0xc000146dc0) Reply frame received for 3\nI0216 14:44:30.156619    2807 log.go:172] (0xc000146dc0) (0xc0009180a0) Create stream\nI0216 14:44:30.156628    2807 log.go:172] (0xc000146dc0) (0xc0009180a0) Stream added, broadcasting: 5\nI0216 14:44:30.158280    2807 log.go:172] (0xc000146dc0) Reply frame received for 5\nI0216 14:44:30.284559    2807 log.go:172] (0xc000146dc0) Data frame received for 3\nI0216 14:44:30.284639    2807 log.go:172] (0xc000918000) (3) Data frame handling\nI0216 14:44:30.284729    2807 log.go:172] (0xc000146dc0) Data frame received for 5\nI0216 14:44:30.284747    2807 log.go:172] (0xc0009180a0) (5) Data frame handling\nI0216 14:44:30.284765    2807 log.go:172] (0xc0009180a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 14:44:30.284848    2807 log.go:172] (0xc000918000) (3) Data frame sent\nI0216 14:44:30.421873    2807 log.go:172] (0xc000146dc0) Data frame received for 1\nI0216 14:44:30.422005    2807 log.go:172] (0xc000146dc0) (0xc000918000) Stream removed, broadcasting: 3\nI0216 14:44:30.422054    2807 log.go:172] (0xc0003166e0) (1) Data frame handling\nI0216 14:44:30.422073    2807 log.go:172] (0xc000146dc0) (0xc0009180a0) Stream removed, broadcasting: 5\nI0216 14:44:30.422113    2807 log.go:172] (0xc0003166e0) (1) Data frame sent\nI0216 14:44:30.422127    2807 log.go:172] (0xc000146dc0) (0xc0003166e0) Stream removed, broadcasting: 1\nI0216 14:44:30.422648    2807 log.go:172] 
(0xc000146dc0) Go away received\nI0216 14:44:30.430355    2807 log.go:172] (0xc000146dc0) (0xc0003166e0) Stream removed, broadcasting: 1\nI0216 14:44:30.430514    2807 log.go:172] (0xc000146dc0) (0xc000918000) Stream removed, broadcasting: 3\nI0216 14:44:30.430604    2807 log.go:172] (0xc000146dc0) (0xc0009180a0) Stream removed, broadcasting: 5\n"
Feb 16 14:44:30.440: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 14:44:30.440: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 14:44:30.452: INFO: Found 1 stateful pods, waiting for 3
Feb 16 14:44:42.149: INFO: Found 2 stateful pods, waiting for 3
Feb 16 14:44:50.470: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:44:50.470: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:44:50.470: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 14:45:00.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:45:00.466: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:45:00.466: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 14:45:10.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:45:10.466: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 14:45:10.466: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 16 14:45:10.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 14:45:11.026: INFO: stderr: "I0216 14:45:10.670821    2829 log.go:172] (0xc0008fa370) (0xc000846640) Create stream\nI0216 14:45:10.670888    2829 log.go:172] (0xc0008fa370) (0xc000846640) Stream added, broadcasting: 1\nI0216 14:45:10.676374    2829 log.go:172] (0xc0008fa370) Reply frame received for 1\nI0216 14:45:10.676417    2829 log.go:172] (0xc0008fa370) (0xc000588140) Create stream\nI0216 14:45:10.676428    2829 log.go:172] (0xc0008fa370) (0xc000588140) Stream added, broadcasting: 3\nI0216 14:45:10.678528    2829 log.go:172] (0xc0008fa370) Reply frame received for 3\nI0216 14:45:10.678634    2829 log.go:172] (0xc0008fa370) (0xc00077c000) Create stream\nI0216 14:45:10.678655    2829 log.go:172] (0xc0008fa370) (0xc00077c000) Stream added, broadcasting: 5\nI0216 14:45:10.681254    2829 log.go:172] (0xc0008fa370) Reply frame received for 5\nI0216 14:45:10.783560    2829 log.go:172] (0xc0008fa370) Data frame received for 5\nI0216 14:45:10.783621    2829 log.go:172] (0xc00077c000) (5) Data frame handling\nI0216 14:45:10.783640    2829 log.go:172] (0xc00077c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 14:45:10.783655    2829 log.go:172] (0xc0008fa370) Data frame received for 3\nI0216 14:45:10.783664    2829 log.go:172] (0xc000588140) (3) Data frame handling\nI0216 14:45:10.783673    2829 log.go:172] (0xc000588140) (3) Data frame sent\nI0216 14:45:11.018491    2829 log.go:172] (0xc0008fa370) (0xc000588140) Stream removed, broadcasting: 3\nI0216 14:45:11.018835    2829 log.go:172] (0xc0008fa370) Data frame received for 1\nI0216 14:45:11.018854    2829 log.go:172] (0xc0008fa370) (0xc00077c000) Stream removed, broadcasting: 5\nI0216 14:45:11.018876    2829 log.go:172] (0xc000846640) (1) Data frame handling\nI0216 14:45:11.018889    2829 log.go:172] (0xc000846640) (1) Data frame sent\nI0216 14:45:11.018897    2829 log.go:172] (0xc0008fa370) (0xc000846640) Stream removed, broadcasting: 1\nI0216 14:45:11.018907    2829 log.go:172] 
(0xc0008fa370) Go away received\nI0216 14:45:11.019304    2829 log.go:172] (0xc0008fa370) (0xc000846640) Stream removed, broadcasting: 1\nI0216 14:45:11.019363    2829 log.go:172] (0xc0008fa370) (0xc000588140) Stream removed, broadcasting: 3\nI0216 14:45:11.019373    2829 log.go:172] (0xc0008fa370) (0xc00077c000) Stream removed, broadcasting: 5\n"
Feb 16 14:45:11.026: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 14:45:11.026: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 14:45:11.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 14:45:11.638: INFO: stderr: "I0216 14:45:11.249607    2850 log.go:172] (0xc0003d8370) (0xc0004e66e0) Create stream\nI0216 14:45:11.249684    2850 log.go:172] (0xc0003d8370) (0xc0004e66e0) Stream added, broadcasting: 1\nI0216 14:45:11.254504    2850 log.go:172] (0xc0003d8370) Reply frame received for 1\nI0216 14:45:11.254596    2850 log.go:172] (0xc0003d8370) (0xc0003c7680) Create stream\nI0216 14:45:11.254612    2850 log.go:172] (0xc0003d8370) (0xc0003c7680) Stream added, broadcasting: 3\nI0216 14:45:11.255630    2850 log.go:172] (0xc0003d8370) Reply frame received for 3\nI0216 14:45:11.255650    2850 log.go:172] (0xc0003d8370) (0xc0004e6000) Create stream\nI0216 14:45:11.255657    2850 log.go:172] (0xc0003d8370) (0xc0004e6000) Stream added, broadcasting: 5\nI0216 14:45:11.256452    2850 log.go:172] (0xc0003d8370) Reply frame received for 5\nI0216 14:45:11.413217    2850 log.go:172] (0xc0003d8370) Data frame received for 5\nI0216 14:45:11.413257    2850 log.go:172] (0xc0004e6000) (5) Data frame handling\nI0216 14:45:11.413276    2850 log.go:172] (0xc0004e6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 14:45:11.545035    2850 log.go:172] (0xc0003d8370) Data frame received for 3\nI0216 14:45:11.545123    2850 log.go:172] (0xc0003c7680) (3) Data frame handling\nI0216 14:45:11.545150    2850 log.go:172] (0xc0003c7680) (3) Data frame sent\nI0216 14:45:11.632158    2850 log.go:172] (0xc0003d8370) (0xc0003c7680) Stream removed, broadcasting: 3\nI0216 14:45:11.632294    2850 log.go:172] (0xc0003d8370) Data frame received for 1\nI0216 14:45:11.632309    2850 log.go:172] (0xc0004e66e0) (1) Data frame handling\nI0216 14:45:11.632359    2850 log.go:172] (0xc0004e66e0) (1) Data frame sent\nI0216 14:45:11.632369    2850 log.go:172] (0xc0003d8370) (0xc0004e66e0) Stream removed, broadcasting: 1\nI0216 14:45:11.632415    2850 log.go:172] (0xc0003d8370) (0xc0004e6000) Stream removed, broadcasting: 5\nI0216 14:45:11.632588    2850 log.go:172] 
(0xc0003d8370) Go away received\nI0216 14:45:11.632873    2850 log.go:172] (0xc0003d8370) (0xc0004e66e0) Stream removed, broadcasting: 1\nI0216 14:45:11.632905    2850 log.go:172] (0xc0003d8370) (0xc0003c7680) Stream removed, broadcasting: 3\nI0216 14:45:11.632913    2850 log.go:172] (0xc0003d8370) (0xc0004e6000) Stream removed, broadcasting: 5\n"
Feb 16 14:45:11.638: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 14:45:11.638: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 14:45:11.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 14:45:12.629: INFO: stderr: "I0216 14:45:11.984138    2865 log.go:172] (0xc000922370) (0xc00092e640) Create stream\nI0216 14:45:11.984240    2865 log.go:172] (0xc000922370) (0xc00092e640) Stream added, broadcasting: 1\nI0216 14:45:11.992881    2865 log.go:172] (0xc000922370) Reply frame received for 1\nI0216 14:45:11.992939    2865 log.go:172] (0xc000922370) (0xc00092e6e0) Create stream\nI0216 14:45:11.992950    2865 log.go:172] (0xc000922370) (0xc00092e6e0) Stream added, broadcasting: 3\nI0216 14:45:11.994525    2865 log.go:172] (0xc000922370) Reply frame received for 3\nI0216 14:45:11.994566    2865 log.go:172] (0xc000922370) (0xc000984000) Create stream\nI0216 14:45:11.994597    2865 log.go:172] (0xc000922370) (0xc000984000) Stream added, broadcasting: 5\nI0216 14:45:11.996219    2865 log.go:172] (0xc000922370) Reply frame received for 5\nI0216 14:45:12.202742    2865 log.go:172] (0xc000922370) Data frame received for 5\nI0216 14:45:12.202859    2865 log.go:172] (0xc000984000) (5) Data frame handling\nI0216 14:45:12.202904    2865 log.go:172] (0xc000984000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 14:45:12.265208    2865 log.go:172] (0xc000922370) Data frame received for 3\nI0216 14:45:12.265522    2865 log.go:172] (0xc00092e6e0) (3) Data frame handling\nI0216 14:45:12.265589    2865 log.go:172] (0xc00092e6e0) (3) Data frame sent\nI0216 14:45:12.613015    2865 log.go:172] (0xc000922370) Data frame received for 1\nI0216 14:45:12.613195    2865 log.go:172] (0xc00092e640) (1) Data frame handling\nI0216 14:45:12.613276    2865 log.go:172] (0xc00092e640) (1) Data frame sent\nI0216 14:45:12.613304    2865 log.go:172] (0xc000922370) (0xc00092e640) Stream removed, broadcasting: 1\nI0216 14:45:12.613956    2865 log.go:172] (0xc000922370) (0xc000984000) Stream removed, broadcasting: 5\nI0216 14:45:12.614060    2865 log.go:172] (0xc000922370) (0xc00092e6e0) Stream removed, broadcasting: 3\nI0216 14:45:12.614088    2865 log.go:172] 
(0xc000922370) Go away received\nI0216 14:45:12.614428    2865 log.go:172] (0xc000922370) (0xc00092e640) Stream removed, broadcasting: 1\nI0216 14:45:12.614450    2865 log.go:172] (0xc000922370) (0xc00092e6e0) Stream removed, broadcasting: 3\nI0216 14:45:12.614463    2865 log.go:172] (0xc000922370) (0xc000984000) Stream removed, broadcasting: 5\n"
Feb 16 14:45:12.630: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 14:45:12.630: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 14:45:12.630: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 14:45:12.666: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 16 14:45:22.678: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 14:45:22.678: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 14:45:22.678: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 14:45:22.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999493s
Feb 16 14:45:23.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983485624s
Feb 16 14:45:24.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969228177s
Feb 16 14:45:25.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957764218s
Feb 16 14:45:26.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94418406s
Feb 16 14:45:27.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.936842721s
Feb 16 14:45:28.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.925914098s
Feb 16 14:45:29.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.913732924s
Feb 16 14:45:30.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.897275019s
Feb 16 14:45:31.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 889.956202ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2786
Feb 16 14:45:32.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 14:45:33.297: INFO: stderr: "I0216 14:45:33.017047    2884 log.go:172] (0xc0008cc370) (0xc000880640) Create stream\nI0216 14:45:33.017153    2884 log.go:172] (0xc0008cc370) (0xc000880640) Stream added, broadcasting: 1\nI0216 14:45:33.022584    2884 log.go:172] (0xc0008cc370) Reply frame received for 1\nI0216 14:45:33.022660    2884 log.go:172] (0xc0008cc370) (0xc0007ea000) Create stream\nI0216 14:45:33.022670    2884 log.go:172] (0xc0008cc370) (0xc0007ea000) Stream added, broadcasting: 3\nI0216 14:45:33.024388    2884 log.go:172] (0xc0008cc370) Reply frame received for 3\nI0216 14:45:33.024421    2884 log.go:172] (0xc0008cc370) (0xc000612140) Create stream\nI0216 14:45:33.024436    2884 log.go:172] (0xc0008cc370) (0xc000612140) Stream added, broadcasting: 5\nI0216 14:45:33.029712    2884 log.go:172] (0xc0008cc370) Reply frame received for 5\nI0216 14:45:33.141962    2884 log.go:172] (0xc0008cc370) Data frame received for 5\nI0216 14:45:33.142094    2884 log.go:172] (0xc000612140) (5) Data frame handling\nI0216 14:45:33.142126    2884 log.go:172] (0xc000612140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 14:45:33.142273    2884 log.go:172] (0xc0008cc370) Data frame received for 3\nI0216 14:45:33.142295    2884 log.go:172] (0xc0007ea000) (3) Data frame handling\nI0216 14:45:33.142306    2884 log.go:172] (0xc0007ea000) (3) Data frame sent\nI0216 14:45:33.288206    2884 log.go:172] (0xc0008cc370) Data frame received for 1\nI0216 14:45:33.288290    2884 log.go:172] (0xc000880640) (1) Data frame handling\nI0216 14:45:33.288315    2884 log.go:172] (0xc000880640) (1) Data frame sent\nI0216 14:45:33.288588    2884 log.go:172] (0xc0008cc370) (0xc000612140) Stream removed, broadcasting: 5\nI0216 14:45:33.288633    2884 log.go:172] (0xc0008cc370) (0xc000880640) Stream removed, broadcasting: 1\nI0216 14:45:33.288806    2884 log.go:172] (0xc0008cc370) (0xc0007ea000) Stream removed, broadcasting: 3\nI0216 14:45:33.288836    2884 log.go:172] 
(0xc0008cc370) Go away received\nI0216 14:45:33.289146    2884 log.go:172] (0xc0008cc370) (0xc000880640) Stream removed, broadcasting: 1\nI0216 14:45:33.289165    2884 log.go:172] (0xc0008cc370) (0xc0007ea000) Stream removed, broadcasting: 3\nI0216 14:45:33.289176    2884 log.go:172] (0xc0008cc370) (0xc000612140) Stream removed, broadcasting: 5\n"
Feb 16 14:45:33.297: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 14:45:33.297: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 14:45:33.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 14:45:33.638: INFO: stderr: "I0216 14:45:33.462885    2900 log.go:172] (0xc0009160b0) (0xc000a706e0) Create stream\nI0216 14:45:33.463051    2900 log.go:172] (0xc0009160b0) (0xc000a706e0) Stream added, broadcasting: 1\nI0216 14:45:33.466013    2900 log.go:172] (0xc0009160b0) Reply frame received for 1\nI0216 14:45:33.466052    2900 log.go:172] (0xc0009160b0) (0xc0008e8000) Create stream\nI0216 14:45:33.466068    2900 log.go:172] (0xc0009160b0) (0xc0008e8000) Stream added, broadcasting: 3\nI0216 14:45:33.467617    2900 log.go:172] (0xc0009160b0) Reply frame received for 3\nI0216 14:45:33.467670    2900 log.go:172] (0xc0009160b0) (0xc0008e80a0) Create stream\nI0216 14:45:33.467687    2900 log.go:172] (0xc0009160b0) (0xc0008e80a0) Stream added, broadcasting: 5\nI0216 14:45:33.468711    2900 log.go:172] (0xc0009160b0) Reply frame received for 5\nI0216 14:45:33.550521    2900 log.go:172] (0xc0009160b0) Data frame received for 5\nI0216 14:45:33.550555    2900 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0216 14:45:33.550567    2900 log.go:172] (0xc0008e80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 14:45:33.551775    2900 log.go:172] (0xc0009160b0) Data frame received for 3\nI0216 14:45:33.551812    2900 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0216 14:45:33.551825    2900 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0216 14:45:33.626576    2900 log.go:172] (0xc0009160b0) Data frame received for 1\nI0216 14:45:33.626701    2900 log.go:172] (0xc0009160b0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0216 14:45:33.626766    2900 log.go:172] (0xc000a706e0) (1) Data frame handling\nI0216 14:45:33.626806    2900 log.go:172] (0xc000a706e0) (1) Data frame sent\nI0216 14:45:33.626955    2900 log.go:172] (0xc0009160b0) (0xc0008e80a0) Stream removed, broadcasting: 5\nI0216 14:45:33.626990    2900 log.go:172] (0xc0009160b0) (0xc000a706e0) Stream removed, broadcasting: 1\nI0216 14:45:33.627011    2900 log.go:172] (0xc0009160b0) Go away received\nI0216 14:45:33.627846    2900 log.go:172] (0xc0009160b0) (0xc000a706e0) Stream removed, broadcasting: 1\nI0216 14:45:33.627872    2900 log.go:172] (0xc0009160b0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0216 14:45:33.627883    2900 log.go:172] (0xc0009160b0) (0xc0008e80a0) Stream removed, broadcasting: 5\n"
Feb 16 14:45:33.638: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 14:45:33.638: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 14:45:33.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2786 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 14:45:34.463: INFO: stderr: "I0216 14:45:33.911519    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e140) Create stream\nI0216 14:45:33.911727    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e140) Stream added, broadcasting: 1\nI0216 14:45:33.919665    2919 log.go:172] (0xc0008cc0b0) Reply frame received for 1\nI0216 14:45:33.919734    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e1e0) Create stream\nI0216 14:45:33.919742    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e1e0) Stream added, broadcasting: 3\nI0216 14:45:33.921149    2919 log.go:172] (0xc0008cc0b0) Reply frame received for 3\nI0216 14:45:33.921185    2919 log.go:172] (0xc0008cc0b0) (0xc0005ca320) Create stream\nI0216 14:45:33.921200    2919 log.go:172] (0xc0008cc0b0) (0xc0005ca320) Stream added, broadcasting: 5\nI0216 14:45:33.923131    2919 log.go:172] (0xc0008cc0b0) Reply frame received for 5\nI0216 14:45:34.117001    2919 log.go:172] (0xc0008cc0b0) Data frame received for 5\nI0216 14:45:34.117613    2919 log.go:172] (0xc0008cc0b0) Data frame received for 3\nI0216 14:45:34.117665    2919 log.go:172] (0xc000a5e1e0) (3) Data frame handling\nI0216 14:45:34.117778    2919 log.go:172] (0xc000a5e1e0) (3) Data frame sent\nI0216 14:45:34.117933    2919 log.go:172] (0xc0005ca320) (5) Data frame handling\nI0216 14:45:34.118013    2919 log.go:172] (0xc0005ca320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 14:45:34.443665    2919 log.go:172] (0xc0008cc0b0) Data frame received for 1\nI0216 14:45:34.443794    2919 log.go:172] (0xc000a5e140) (1) Data frame handling\nI0216 14:45:34.443859    2919 log.go:172] (0xc000a5e140) (1) Data frame sent\nI0216 14:45:34.444115    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e140) Stream removed, broadcasting: 1\nI0216 14:45:34.445151    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e1e0) Stream removed, broadcasting: 3\nI0216 14:45:34.445388    2919 log.go:172] (0xc0008cc0b0) (0xc0005ca320) Stream removed, broadcasting: 5\nI0216 14:45:34.445472    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e140) Stream removed, broadcasting: 1\nI0216 14:45:34.445489    2919 log.go:172] (0xc0008cc0b0) (0xc000a5e1e0) Stream removed, broadcasting: 3\nI0216 14:45:34.445502    2919 log.go:172] (0xc0008cc0b0) (0xc0005ca320) Stream removed, broadcasting: 5\n"
Feb 16 14:45:34.463: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 14:45:34.463: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 14:45:34.463: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 16 14:46:24.603: INFO: Deleting all statefulset in ns statefulset-2786
Feb 16 14:46:24.608: INFO: Scaling statefulset ss to 0
Feb 16 14:46:24.623: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 14:46:24.627: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:46:24.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2786" for this suite.
Feb 16 14:46:32.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:46:32.796: INFO: namespace statefulset-2786 deletion completed in 8.130644523s

• [SLOW TEST:167.793 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
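Editor's note: the reverse-order scale-down this test verifies (ss-2 terminated before ss-1 before ss-0) is the default OrderedReady pod management behavior of StatefulSets. A minimal sketch of the relevant spec fields, matching the names in this run; the serviceName is a hypothetical placeholder, not taken from the log:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-2786
spec:
  # Default policy; scale-up creates ordinals 0..N-1 in order and waits for
  # each to be Running and Ready; scale-down deletes the highest ordinal first.
  podManagementPolicy: OrderedReady
  replicas: 3
  serviceName: test   # hypothetical headless Service name
```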
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:46:32.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 16 14:46:33.025: INFO: Waiting up to 5m0s for pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d" in namespace "containers-6682" to be "success or failure"
Feb 16 14:46:33.032: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.779967ms
Feb 16 14:46:36.004: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.978309648s
Feb 16 14:46:38.010: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.984387845s
Feb 16 14:46:40.050: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.024273988s
Feb 16 14:46:42.060: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.034710484s
Feb 16 14:46:44.069: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.043962954s
Feb 16 14:46:46.075: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.049686338s
Feb 16 14:46:48.080: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.054950586s
STEP: Saw pod success
Feb 16 14:46:48.080: INFO: Pod "client-containers-38341a04-e98f-4234-a4db-52a815966c5d" satisfied condition "success or failure"
Feb 16 14:46:48.083: INFO: Trying to get logs from node iruya-node pod client-containers-38341a04-e98f-4234-a4db-52a815966c5d container test-container: 
STEP: delete the pod
Feb 16 14:46:48.131: INFO: Waiting for pod client-containers-38341a04-e98f-4234-a4db-52a815966c5d to disappear
Feb 16 14:46:48.247: INFO: Pod client-containers-38341a04-e98f-4234-a4db-52a815966c5d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:46:48.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6682" for this suite.
Feb 16 14:46:54.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:46:54.531: INFO: namespace containers-6682 deletion completed in 6.27349707s

• [SLOW TEST:21.735 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
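Editor's note: the pod this test creates overrides the image's default command (its Docker ENTRYPOINT) through the container `command` field. A minimal hedged sketch; the container name matches the log above, but the image, command, and args are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container            # container name as logged above
    image: busybox                  # assumed image
    command: ["/bin/echo"]          # replaces the image's ENTRYPOINT
    args: ["override", "command"]   # replaces the image's CMD
```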
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:46:54.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-96da37e0-bc14-4b23-9736-c861f023fe7f
STEP: Creating a pod to test consume configMaps
Feb 16 14:46:54.775: INFO: Waiting up to 5m0s for pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e" in namespace "configmap-9240" to be "success or failure"
Feb 16 14:46:54.953: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 178.388963ms
Feb 16 14:46:56.960: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184865865s
Feb 16 14:46:58.967: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192219238s
Feb 16 14:47:00.972: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197360565s
Feb 16 14:47:02.980: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20493702s
Feb 16 14:47:04.987: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.211618791s
Feb 16 14:47:07.541: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.765890793s
Feb 16 14:47:09.563: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.787517929s
Feb 16 14:47:11.574: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.79922128s
STEP: Saw pod success
Feb 16 14:47:11.574: INFO: Pod "pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e" satisfied condition "success or failure"
Feb 16 14:47:11.579: INFO: Trying to get logs from node iruya-node pod pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e container configmap-volume-test: 
STEP: delete the pod
Feb 16 14:47:11.806: INFO: Waiting for pod pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e to disappear
Feb 16 14:47:11.828: INFO: Pod pod-configmaps-84891f6e-a8bf-4059-b489-ea32aa399e8e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:47:11.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9240" for this suite.
Feb 16 14:47:17.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:47:18.059: INFO: namespace configmap-9240 deletion completed in 6.211140057s

• [SLOW TEST:23.527 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
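Editor's note: consuming a ConfigMap from a pod volume, as this test does, means mounting the ConfigMap so each key appears as a file under the mount path. A minimal hedged sketch; the ConfigMap and container names come from this run, while the image, key, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test     # container name as logged above
    image: busybox                  # assumed image
    command: ["cat", "/etc/configmap-volume/data-1"]   # assumed key/path
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-96da37e0-bc14-4b23-9736-c861f023fe7f   # name from this run
```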
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:47:18.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b7c67615-7633-4172-9031-96d9d6f8707b
STEP: Creating a pod to test consume secrets
Feb 16 14:47:18.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8" in namespace "projected-4422" to be "success or failure"
Feb 16 14:47:18.335: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 120.330479ms
Feb 16 14:47:20.348: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133721807s
Feb 16 14:47:22.381: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166567078s
Feb 16 14:47:24.393: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178812225s
Feb 16 14:47:27.095: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.880484853s
Feb 16 14:47:29.104: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.88918947s
Feb 16 14:47:31.926: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.711697571s
Feb 16 14:47:34.077: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.862040951s
Feb 16 14:47:36.086: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.871730449s
Feb 16 14:47:38.094: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Running", Reason="", readiness=true. Elapsed: 19.879578146s
Feb 16 14:47:40.106: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Running", Reason="", readiness=true. Elapsed: 21.891416318s
Feb 16 14:47:42.115: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.900167228s
STEP: Saw pod success
Feb 16 14:47:42.115: INFO: Pod "pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8" satisfied condition "success or failure"
Feb 16 14:47:42.123: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 14:47:42.521: INFO: Waiting for pod pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8 to disappear
Feb 16 14:47:42.563: INFO: Pod pod-projected-secrets-4915d29c-b3c7-4a8f-b0bc-0055c220c1a8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:47:42.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4422" for this suite.
Feb 16 14:47:48.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:47:48.799: INFO: namespace projected-4422 deletion completed in 6.225207947s

• [SLOW TEST:30.739 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:47:48.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-cd7edcfe-e8de-4dda-857e-e4c485bfb8e0
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:48:09.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4282" for this suite.
Feb 16 14:48:33.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:48:33.382: INFO: namespace configmap-4282 deletion completed in 24.178366789s

• [SLOW TEST:44.583 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:48:33.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2947
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2947
STEP: Deleting pre-stop pod
Feb 16 14:49:12.996: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:49:13.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2947" for this suite.
Feb 16 14:50:01.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:50:01.195: INFO: namespace prestop-2947 deletion completed in 48.156724654s

• [SLOW TEST:87.813 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
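Editor's note: the `"prestop": 1` counter the server pod reports above is incremented when the tester pod's `lifecycle.preStop` hook fires during deletion. A minimal hedged sketch of such a hook; the pod name matches the log, but the image, handler type, and notification URL are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tester                      # tester pod name as logged above
spec:
  containers:
  - name: tester                    # assumed container name
    image: busybox                  # assumed image
    lifecycle:
      preStop:                      # runs before the container receives SIGTERM
        exec:
          command: ["/bin/sh", "-c", "wget -qO- http://server/prestop"]   # assumed notification call
```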
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:50:01.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 16 14:50:01.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9664'
Feb 16 14:50:01.935: INFO: stderr: ""
Feb 16 14:50:01.935: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 16 14:50:02.942: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:02.943: INFO: Found 0 / 1
Feb 16 14:50:03.952: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:03.952: INFO: Found 0 / 1
Feb 16 14:50:04.945: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:04.945: INFO: Found 0 / 1
Feb 16 14:50:05.949: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:05.949: INFO: Found 0 / 1
Feb 16 14:50:06.950: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:06.950: INFO: Found 0 / 1
Feb 16 14:50:07.952: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:07.952: INFO: Found 0 / 1
Feb 16 14:50:08.948: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:08.948: INFO: Found 0 / 1
Feb 16 14:50:09.947: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:09.947: INFO: Found 0 / 1
Feb 16 14:50:10.942: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:10.942: INFO: Found 0 / 1
Feb 16 14:50:11.945: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:11.945: INFO: Found 0 / 1
Feb 16 14:50:12.953: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:12.953: INFO: Found 0 / 1
Feb 16 14:50:13.952: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:13.952: INFO: Found 0 / 1
Feb 16 14:50:14.945: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:14.945: INFO: Found 0 / 1
Feb 16 14:50:15.947: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:15.947: INFO: Found 1 / 1
Feb 16 14:50:15.947: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 16 14:50:15.952: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:50:15.952: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 16 14:50:15.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7wsml redis-master --namespace=kubectl-9664'
Feb 16 14:50:16.145: INFO: stderr: ""
Feb 16 14:50:16.145: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 16 Feb 14:50:14.130 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Feb 14:50:14.130 # Server started, Redis version 3.2.12\n1:M 16 Feb 14:50:14.131 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Feb 14:50:14.131 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 16 14:50:16.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7wsml redis-master --namespace=kubectl-9664 --tail=1'
Feb 16 14:50:16.359: INFO: stderr: ""
Feb 16 14:50:16.359: INFO: stdout: "1:M 16 Feb 14:50:14.131 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 16 14:50:16.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7wsml redis-master --namespace=kubectl-9664 --limit-bytes=1'
Feb 16 14:50:16.546: INFO: stderr: ""
Feb 16 14:50:16.546: INFO: stdout: " "
STEP: exposing timestamps
Feb 16 14:50:16.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7wsml redis-master --namespace=kubectl-9664 --tail=1 --timestamps'
Feb 16 14:50:16.656: INFO: stderr: ""
Feb 16 14:50:16.656: INFO: stdout: "2020-02-16T14:50:14.134706259Z 1:M 16 Feb 14:50:14.131 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 16 14:50:19.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7wsml redis-master --namespace=kubectl-9664 --since=1s'
Feb 16 14:50:19.339: INFO: stderr: ""
Feb 16 14:50:19.339: INFO: stdout: ""
Feb 16 14:50:19.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7wsml redis-master --namespace=kubectl-9664 --since=24h'
Feb 16 14:50:19.522: INFO: stderr: ""
Feb 16 14:50:19.522: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 16 Feb 14:50:14.130 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Feb 14:50:14.130 # Server started, Redis version 3.2.12\n1:M 16 Feb 14:50:14.131 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Feb 14:50:14.131 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 16 14:50:19.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9664'
Feb 16 14:50:19.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:50:19.606: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 16 14:50:19.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9664'
Feb 16 14:50:19.764: INFO: stderr: "No resources found.\n"
Feb 16 14:50:19.764: INFO: stdout: ""
Feb 16 14:50:19.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9664 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 14:50:19.956: INFO: stderr: ""
Feb 16 14:50:19.956: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:50:19.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9664" for this suite.
Feb 16 14:50:42.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:50:42.148: INFO: namespace kubectl-9664 deletion completed in 22.174370701s

• [SLOW TEST:40.952 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
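The go-template in the cleanup step above keeps only pods whose `metadata.deletionTimestamp` is unset, i.e. pods not yet marked for deletion. A minimal Python sketch of that filter follows; the pod names in the sample list are hypothetical stand-ins for an API response, not values from this log.

```python
# Sample pod list mimicking the shape of a `kubectl get pods -o json` response.
# One pod is marked for deletion (deletionTimestamp set), one is not.
pods = {
    "items": [
        {"metadata": {"name": "redis-master-abc12"}},
        {"metadata": {"name": "redis-master-xyz34",
                      "deletionTimestamp": "2020-02-16T14:50:19Z"}},
    ]
}

def surviving_pods(pod_list):
    """Names of pods not marked for deletion (mirrors the go-template filter)."""
    return [
        item["metadata"]["name"]
        for item in pod_list["items"]
        if "deletionTimestamp" not in item["metadata"]
    ]

print(surviving_pods(pods))  # only the pod without a deletionTimestamp
```

An empty result, as seen in the stdout above, means every matching pod has already been force-deleted or marked for deletion.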
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:50:42.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 14:50:42.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94" in namespace "downward-api-1700" to be "success or failure"
Feb 16 14:50:42.363: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 21.824956ms
Feb 16 14:50:44.376: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034905332s
Feb 16 14:50:46.388: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047165972s
Feb 16 14:50:48.399: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057422105s
Feb 16 14:50:50.408: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066946916s
Feb 16 14:50:52.533: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 10.191956338s
Feb 16 14:50:54.541: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 12.199524735s
Feb 16 14:50:57.960: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Pending", Reason="", readiness=false. Elapsed: 15.619094261s
Feb 16 14:50:59.972: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.630290021s
STEP: Saw pod success
Feb 16 14:50:59.972: INFO: Pod "downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94" satisfied condition "success or failure"
Feb 16 14:50:59.977: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94 container client-container: 
STEP: delete the pod
Feb 16 14:51:00.041: INFO: Waiting for pod downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94 to disappear
Feb 16 14:51:00.054: INFO: Pod downwardapi-volume-f2901c4d-f349-4331-a11b-348b49511c94 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:51:00.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1700" for this suite.
Feb 16 14:51:06.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:51:06.455: INFO: namespace downward-api-1700 deletion completed in 6.398090838s

• [SLOW TEST:24.306 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
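The Downward API test above creates a pod whose volume projects the container's own memory limit into a file. A minimal sketch of such a manifest, expressed as a Python dict; the image, file path, and limit value are illustrative assumptions, not taken from the log.

```python
# Sketch of a pod that exposes its memory limit via a downwardAPI volume.
# The container reads the projected file; resourceFieldRef selects the limit.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # illustrative image
            "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
            "resources": {"limits": {"memory": "64Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "memory_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                    },
                }],
            },
        }],
    },
}
```

The test's "success or failure" condition is the pod running to completion after printing the projected value.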
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:51:06.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e771ef2c-7dcc-480e-8711-a7effc4da268
STEP: Creating a pod to test consume configMaps
Feb 16 14:51:06.653: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52" in namespace "configmap-8707" to be "success or failure"
Feb 16 14:51:06.763: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 109.856618ms
Feb 16 14:51:08.773: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11995176s
Feb 16 14:51:10.783: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129541445s
Feb 16 14:51:12.788: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13512438s
Feb 16 14:51:14.795: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141559347s
Feb 16 14:51:16.809: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155366405s
Feb 16 14:51:18.816: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Pending", Reason="", readiness=false. Elapsed: 12.16261665s
Feb 16 14:51:20.836: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.183009988s
STEP: Saw pod success
Feb 16 14:51:20.836: INFO: Pod "pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52" satisfied condition "success or failure"
Feb 16 14:51:20.853: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52 container configmap-volume-test: 
STEP: delete the pod
Feb 16 14:51:20.999: INFO: Waiting for pod pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52 to disappear
Feb 16 14:51:21.006: INFO: Pod pod-configmaps-4f398299-c690-41b2-b3d4-ee64780aad52 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:51:21.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8707" for this suite.
Feb 16 14:51:27.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:51:27.249: INFO: namespace configmap-8707 deletion completed in 6.237987594s

• [SLOW TEST:20.793 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
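The ConfigMap test above mounts a ConfigMap as a volume with an explicit `defaultMode`, which sets the file permissions of the projected keys. A sketch of the relevant volume stanza; the mode value 0o400 and key name are illustrative assumptions.

```python
# Sketch of a ConfigMap volume with defaultMode controlling file permissions.
# 0o400 (read-only for owner) is an example; the test sets its own mode.
volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume",  # illustrative name
        "defaultMode": 0o400,
    },
}

def mode_string(mode):
    """Render an octal mode the way `ls -l`-style tooling would, e.g. '0400'."""
    return format(mode, "04o")

print(mode_string(volume["configMap"]["defaultMode"]))
```

Note that in JSON manifests the mode must be given in decimal (0o400 is 256), since JSON has no octal literals.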
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:51:27.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 16 14:51:27.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3411'
Feb 16 14:51:27.734: INFO: stderr: ""
Feb 16 14:51:27.734: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 16 14:51:28.800: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:28.800: INFO: Found 0 / 1
Feb 16 14:51:29.743: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:29.743: INFO: Found 0 / 1
Feb 16 14:51:30.750: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:30.750: INFO: Found 0 / 1
Feb 16 14:51:31.745: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:31.745: INFO: Found 0 / 1
Feb 16 14:51:32.748: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:32.748: INFO: Found 0 / 1
Feb 16 14:51:33.746: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:33.747: INFO: Found 0 / 1
Feb 16 14:51:34.743: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:34.743: INFO: Found 0 / 1
Feb 16 14:51:35.743: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:35.744: INFO: Found 0 / 1
Feb 16 14:51:36.745: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:36.745: INFO: Found 0 / 1
Feb 16 14:51:39.070: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:39.070: INFO: Found 0 / 1
Feb 16 14:51:39.747: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:39.747: INFO: Found 0 / 1
Feb 16 14:51:40.741: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:40.741: INFO: Found 0 / 1
Feb 16 14:51:41.743: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:41.744: INFO: Found 1 / 1
Feb 16 14:51:41.744: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 16 14:51:41.748: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:41.748: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 16 14:51:41.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tl8ll --namespace=kubectl-3411 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 16 14:51:41.947: INFO: stderr: ""
Feb 16 14:51:41.947: INFO: stdout: "pod/redis-master-tl8ll patched\n"
STEP: checking annotations
Feb 16 14:51:42.007: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 14:51:42.007: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:51:42.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3411" for this suite.
Feb 16 14:52:06.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:52:06.168: INFO: namespace kubectl-3411 deletion completed in 24.15431684s

• [SLOW TEST:38.917 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
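The patch body in the log above, `{"metadata":{"annotations":{"x":"y"}}}`, is a merge patch: nested maps are merged into the live object rather than replacing it. A minimal Python sketch of that merge semantics (the pre-existing annotation is a hypothetical example):

```python
# The exact patch body the test sends via `kubectl patch`.
patch = {"metadata": {"annotations": {"x": "y"}}}

def merge(obj, delta):
    """Recursively merge `delta` into `obj`, as a JSON merge patch does for
    nested objects. Scalars and lists in `delta` overwrite; dicts merge."""
    for key, value in delta.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge(obj[key], value)
        else:
            obj[key] = value
    return obj

# Hypothetical live pod object before patching.
pod = {"metadata": {"name": "redis-master-tl8ll", "annotations": {"a": "b"}}}
merge(pod, patch)
print(pod["metadata"]["annotations"])
```

After the patch, the pod keeps its existing annotations and gains `x: y`, which is what the "checking annotations" step verifies.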
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:52:06.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 16 14:52:06.341: INFO: Waiting up to 5m0s for pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839" in namespace "emptydir-323" to be "success or failure"
Feb 16 14:52:06.385: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 43.937456ms
Feb 16 14:52:08.394: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052315467s
Feb 16 14:52:10.409: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068192287s
Feb 16 14:52:12.422: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081002533s
Feb 16 14:52:14.429: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088020541s
Feb 16 14:52:16.441: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099798076s
Feb 16 14:52:18.450: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 12.108944082s
Feb 16 14:52:20.464: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Pending", Reason="", readiness=false. Elapsed: 14.122903867s
Feb 16 14:52:22.619: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.277497757s
STEP: Saw pod success
Feb 16 14:52:22.619: INFO: Pod "pod-955161a3-fc27-40fe-9b34-1f4b3752e839" satisfied condition "success or failure"
Feb 16 14:52:22.623: INFO: Trying to get logs from node iruya-node pod pod-955161a3-fc27-40fe-9b34-1f4b3752e839 container test-container: 
STEP: delete the pod
Feb 16 14:52:22.854: INFO: Waiting for pod pod-955161a3-fc27-40fe-9b34-1f4b3752e839 to disappear
Feb 16 14:52:22.877: INFO: Pod pod-955161a3-fc27-40fe-9b34-1f4b3752e839 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:52:22.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-323" for this suite.
Feb 16 14:52:28.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:52:29.048: INFO: namespace emptydir-323 deletion completed in 6.163011242s

• [SLOW TEST:22.880 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
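The EmptyDir test name encodes its three parameters: run as a non-root user, expect file mode 0666, and back the volume with tmpfs (`medium: Memory`). A sketch of a pod spec combining those settings; the user ID and image are illustrative assumptions.

```python
# Sketch of an emptyDir-on-tmpfs pod run as a non-root user.
# medium "Memory" makes the emptyDir a tmpfs mount.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-tmpfs-example"},
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1001},  # illustrative non-root UID
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # illustrative image
            "command": ["sh", "-c", "ls -l /test-volume"],
            "volumeMounts": [{"name": "scratch", "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "scratch", "emptyDir": {"medium": "Memory"}}],
    },
}

expected_mode = 0o666  # world read/write, matching the (non-root,0666,tmpfs) case
```

This variant is `[LinuxOnly]` because tmpfs-backed emptyDir and POSIX file modes are Linux-specific.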
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:52:29.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96
Feb 16 14:52:29.351: INFO: Pod name my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96: Found 0 pods out of 1
Feb 16 14:52:34.492: INFO: Pod name my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96: Found 1 pods out of 1
Feb 16 14:52:34.492: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96" are running
Feb 16 14:52:44.511: INFO: Pod "my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96-vrfmz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 14:52:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 14:52:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 14:52:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 14:52:29 +0000 UTC Reason: Message:}])
Feb 16 14:52:44.511: INFO: Trying to dial the pod
Feb 16 14:52:49.544: INFO: Controller my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96: Got expected result from replica 1 [my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96-vrfmz]: "my-hostname-basic-9d303ca6-6f72-44b7-b73b-88d3836b0c96-vrfmz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:52:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8887" for this suite.
Feb 16 14:52:55.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:52:55.673: INFO: namespace replication-controller-8887 deletion completed in 6.121062831s

• [SLOW TEST:26.625 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
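The ReplicationController test above creates an RC with a single replica and then dials each pod, expecting it to serve its own hostname back. A sketch of the RC manifest shape; the image and label key are illustrative assumptions (the real test generates a UUID-suffixed name, as seen in the log).

```python
# Sketch of a one-replica ReplicationController whose pods serve their hostname.
name = "my-hostname-basic-example"  # the real test appends a UUID
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": name},
    "spec": {
        "replicas": 1,
        "selector": {"name": name},
        "template": {
            "metadata": {"labels": {"name": name}},
            "spec": {
                "containers": [{
                    "name": name,
                    "image": "serve-hostname:latest",  # illustrative image
                    "ports": [{"containerPort": 9376}],
                }],
            },
        },
    },
}
```

The "Got expected result from replica 1" line above corresponds to the dialed pod replying with its own pod name.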
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:52:55.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0216 14:53:21.715918       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 14:53:21.716: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:53:21.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9770" for this suite.
Feb 16 14:53:52.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:53:52.846: INFO: namespace gc-9770 deletion completed in 31.119876561s

• [SLOW TEST:57.173 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
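The garbage-collector test above gives half the pods two owners (one RC being deleted, one staying) and verifies the dependents survive. The core rule: a dependent is only collected once none of its owners still exist. A minimal Python sketch of that decision (UIDs are hypothetical):

```python
def is_garbage(owner_refs, live_owner_uids):
    """A dependent is collectable only when every ownerReference points at an
    owner that no longer exists; one surviving owner keeps it alive."""
    return not any(ref["uid"] in live_owner_uids for ref in owner_refs)

# Hypothetical UIDs: one RC was deleted, one remains.
live = {"uid-rc-to-stay"}
dual_owned = [{"uid": "uid-rc-deleted"}, {"uid": "uid-rc-to-stay"}]
single_owned = [{"uid": "uid-rc-deleted"}]

print(is_garbage(dual_owned, live))    # survives: one valid owner remains
print(is_garbage(single_owned, live))  # collected: no owners left
```

This is why deleting `simpletest-rc-to-be-deleted` removes only its solely-owned pods while the dual-owned half stays.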
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:53:52.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 16 14:54:07.583: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:54:08.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8658" for this suite.
Feb 16 14:54:14.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:54:14.577: INFO: namespace container-runtime-8658 deletion completed in 6.157895495s

• [SLOW TEST:21.730 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
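The container-runtime test above checks `TerminationMessagePolicy: FallbackToLogsOnError`: the log tail is used as the termination message only when the message file is empty *and* the container failed, so a successful pod reports an empty message (the `Expected: &{} to match ... Termination Message:` line). A Python sketch of that policy, under the stated assumption that this captures only the fallback rule, not the kubelet's full behavior:

```python
def termination_message(policy, exit_code, file_contents, log_tail):
    """Sketch of Kubernetes termination-message resolution:
    - a non-empty message file always wins;
    - FallbackToLogsOnError uses the log tail only on failure;
    - otherwise the message is empty."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail
    return ""

# Succeeded pod, empty message file -> empty message, as the test expects.
print(repr(termination_message("FallbackToLogsOnError", 0, "", "some logs")))
```

With a non-zero exit code the same policy would surface the log tail instead.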
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:54:14.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-e724bfce-8b24-4ab3-8e09-505bb19a7c03
STEP: Creating a pod to test consume secrets
Feb 16 14:54:14.784: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90" in namespace "projected-960" to be "success or failure"
Feb 16 14:54:14.795: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 10.728731ms
Feb 16 14:54:16.806: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021550269s
Feb 16 14:54:18.815: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030383258s
Feb 16 14:54:20.835: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050214972s
Feb 16 14:54:23.748: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963893673s
Feb 16 14:54:25.760: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 10.975535568s
Feb 16 14:54:27.773: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Pending", Reason="", readiness=false. Elapsed: 12.988980011s
Feb 16 14:54:29.782: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.997279068s
STEP: Saw pod success
Feb 16 14:54:29.782: INFO: Pod "pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90" satisfied condition "success or failure"
Feb 16 14:54:29.786: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90 container secret-volume-test: 
STEP: delete the pod
Feb 16 14:54:30.562: INFO: Waiting for pod pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90 to disappear
Feb 16 14:54:30.608: INFO: Pod pod-projected-secrets-61cadf70-2c97-476b-833d-008631030f90 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:54:30.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-960" for this suite.
Feb 16 14:54:36.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:54:37.006: INFO: namespace projected-960 deletion completed in 6.386127928s

• [SLOW TEST:22.429 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
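The projected-secret test above mounts the same secret into a pod at multiple volume mounts. A sketch of the relevant spec fragment; mount paths and the secret name are illustrative assumptions.

```python
# Sketch: one secret consumed through two projected volumes in a single pod.
secret_name = "projected-secret-example"
spec = {
    "containers": [{
        "name": "secret-volume-test",
        "image": "busybox",  # illustrative image
        "volumeMounts": [
            {"name": "secret-volume-1", "mountPath": "/etc/secret-volume-1"},
            {"name": "secret-volume-2", "mountPath": "/etc/secret-volume-2"},
        ],
    }],
    "volumes": [
        {"name": "secret-volume-1",
         "projected": {"sources": [{"secret": {"name": secret_name}}]}},
        {"name": "secret-volume-2",
         "projected": {"sources": [{"secret": {"name": secret_name}}]}},
    ],
}

mounted = {m["name"] for m in spec["containers"][0]["volumeMounts"]}
declared = {v["name"] for v in spec["volumes"]}
```

Every volumeMount must name a declared volume; the test then reads the secret's content from both paths.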
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:54:37.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9655
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 14:54:37.233: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 14:55:33.640: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9655 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 14:55:33.640: INFO: >>> kubeConfig: /root/.kube/config
I0216 14:55:33.733326       8 log.go:172] (0xc000ac2bb0) (0xc002d44000) Create stream
I0216 14:55:33.733404       8 log.go:172] (0xc000ac2bb0) (0xc002d44000) Stream added, broadcasting: 1
I0216 14:55:33.745344       8 log.go:172] (0xc000ac2bb0) Reply frame received for 1
I0216 14:55:33.745515       8 log.go:172] (0xc000ac2bb0) (0xc00036e000) Create stream
I0216 14:55:33.745543       8 log.go:172] (0xc000ac2bb0) (0xc00036e000) Stream added, broadcasting: 3
I0216 14:55:33.759162       8 log.go:172] (0xc000ac2bb0) Reply frame received for 3
I0216 14:55:33.759213       8 log.go:172] (0xc000ac2bb0) (0xc002020fa0) Create stream
I0216 14:55:33.759226       8 log.go:172] (0xc000ac2bb0) (0xc002020fa0) Stream added, broadcasting: 5
I0216 14:55:33.761518       8 log.go:172] (0xc000ac2bb0) Reply frame received for 5
I0216 14:55:34.023641       8 log.go:172] (0xc000ac2bb0) Data frame received for 3
I0216 14:55:34.023742       8 log.go:172] (0xc00036e000) (3) Data frame handling
I0216 14:55:34.023836       8 log.go:172] (0xc00036e000) (3) Data frame sent
I0216 14:55:34.247110       8 log.go:172] (0xc000ac2bb0) Data frame received for 1
I0216 14:55:34.247211       8 log.go:172] (0xc002d44000) (1) Data frame handling
I0216 14:55:34.247244       8 log.go:172] (0xc002d44000) (1) Data frame sent
I0216 14:55:34.247411       8 log.go:172] (0xc000ac2bb0) (0xc002d44000) Stream removed, broadcasting: 1
I0216 14:55:34.247721       8 log.go:172] (0xc000ac2bb0) (0xc002020fa0) Stream removed, broadcasting: 5
I0216 14:55:34.247912       8 log.go:172] (0xc000ac2bb0) (0xc00036e000) Stream removed, broadcasting: 3
I0216 14:55:34.247948       8 log.go:172] (0xc000ac2bb0) Go away received
I0216 14:55:34.247995       8 log.go:172] (0xc000ac2bb0) (0xc002d44000) Stream removed, broadcasting: 1
I0216 14:55:34.248011       8 log.go:172] (0xc000ac2bb0) (0xc00036e000) Stream removed, broadcasting: 3
I0216 14:55:34.248024       8 log.go:172] (0xc000ac2bb0) (0xc002020fa0) Stream removed, broadcasting: 5
Feb 16 14:55:34.248: INFO: Found all expected endpoints: [netserver-0]
Feb 16 14:55:34.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9655 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 14:55:34.256: INFO: >>> kubeConfig: /root/.kube/config
I0216 14:55:34.371392       8 log.go:172] (0xc0007d0790) (0xc002021360) Create stream
I0216 14:55:34.371532       8 log.go:172] (0xc0007d0790) (0xc002021360) Stream added, broadcasting: 1
I0216 14:55:34.385295       8 log.go:172] (0xc0007d0790) Reply frame received for 1
I0216 14:55:34.385390       8 log.go:172] (0xc0007d0790) (0xc00206a960) Create stream
I0216 14:55:34.385405       8 log.go:172] (0xc0007d0790) (0xc00206a960) Stream added, broadcasting: 3
I0216 14:55:34.392348       8 log.go:172] (0xc0007d0790) Reply frame received for 3
I0216 14:55:34.392781       8 log.go:172] (0xc0007d0790) (0xc00036e0a0) Create stream
I0216 14:55:34.392840       8 log.go:172] (0xc0007d0790) (0xc00036e0a0) Stream added, broadcasting: 5
I0216 14:55:34.397877       8 log.go:172] (0xc0007d0790) Reply frame received for 5
I0216 14:55:34.613898       8 log.go:172] (0xc0007d0790) Data frame received for 3
I0216 14:55:34.613966       8 log.go:172] (0xc00206a960) (3) Data frame handling
I0216 14:55:34.614002       8 log.go:172] (0xc00206a960) (3) Data frame sent
I0216 14:55:34.829462       8 log.go:172] (0xc0007d0790) Data frame received for 1
I0216 14:55:34.829525       8 log.go:172] (0xc0007d0790) (0xc00206a960) Stream removed, broadcasting: 3
I0216 14:55:34.829600       8 log.go:172] (0xc002021360) (1) Data frame handling
I0216 14:55:34.829644       8 log.go:172] (0xc002021360) (1) Data frame sent
I0216 14:55:34.829659       8 log.go:172] (0xc0007d0790) (0xc00036e0a0) Stream removed, broadcasting: 5
I0216 14:55:34.829702       8 log.go:172] (0xc0007d0790) (0xc002021360) Stream removed, broadcasting: 1
I0216 14:55:34.829719       8 log.go:172] (0xc0007d0790) Go away received
I0216 14:55:34.830059       8 log.go:172] (0xc0007d0790) (0xc002021360) Stream removed, broadcasting: 1
I0216 14:55:34.830073       8 log.go:172] (0xc0007d0790) (0xc00206a960) Stream removed, broadcasting: 3
I0216 14:55:34.830084       8 log.go:172] (0xc0007d0790) (0xc00036e0a0) Stream removed, broadcasting: 5
Feb 16 14:55:34.830: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:55:34.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9655" for this suite.
Feb 16 14:55:58.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:55:59.020: INFO: namespace pod-network-test-9655 deletion completed in 24.181037097s

• [SLOW TEST:82.012 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
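The node-pod check above execs `curl` against each netserver pod's `/hostName` endpoint and pipes the reply through `grep -v '^\s*$'` to drop blank lines before comparing hostnames against the expected endpoint list (`netserver-0`, `netserver-1`). The blank-line filter can be exercised locally; here the netserver reply is simulated with `printf`, so no cluster is needed:

```shell
# Simulate a /hostName reply padded with blank lines, then apply the same
# filter the test command uses: grep -v '^\s*$' removes empty lines so only
# the bare hostname remains for comparison.
reply=$(printf '\nnetserver-0\n\n' | grep -v '^\s*$')
echo "$reply"   # netserver-0
```

The `--max-time 15 --connect-timeout 1` flags on the real invocation bound each probe, which is why a single unreachable endpoint fails fast rather than hanging the suite.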
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:55:59.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 16 14:55:59.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1338'
Feb 16 14:56:02.351: INFO: stderr: ""
Feb 16 14:56:02.351: INFO: stdout: "pod/pause created\n"
Feb 16 14:56:02.351: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 16 14:56:02.351: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1338" to be "running and ready"
Feb 16 14:56:02.457: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 106.248226ms
Feb 16 14:56:04.472: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121379051s
Feb 16 14:56:06.507: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156078196s
Feb 16 14:56:08.519: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167859349s
Feb 16 14:56:10.529: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178415658s
Feb 16 14:56:12.545: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194428931s
Feb 16 14:56:14.560: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209189292s
Feb 16 14:56:16.571: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 14.219727019s
Feb 16 14:56:16.571: INFO: Pod "pause" satisfied condition "running and ready"
Feb 16 14:56:16.571: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 16 14:56:16.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1338'
Feb 16 14:56:16.696: INFO: stderr: ""
Feb 16 14:56:16.696: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 16 14:56:16.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1338'
Feb 16 14:56:16.842: INFO: stderr: ""
Feb 16 14:56:16.842: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          14s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 16 14:56:16.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1338'
Feb 16 14:56:16.960: INFO: stderr: ""
Feb 16 14:56:16.960: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 16 14:56:16.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1338'
Feb 16 14:56:17.103: INFO: stderr: ""
Feb 16 14:56:17.103: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          15s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 16 14:56:17.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1338'
Feb 16 14:56:17.326: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 14:56:17.326: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 16 14:56:17.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1338'
Feb 16 14:56:17.501: INFO: stderr: "No resources found.\n"
Feb 16 14:56:17.501: INFO: stdout: ""
Feb 16 14:56:17.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1338 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 14:56:17.670: INFO: stderr: ""
Feb 16 14:56:17.670: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:56:17.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1338" for this suite.
Feb 16 14:56:23.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:56:23.908: INFO: namespace kubectl-1338 deletion completed in 6.230911405s

• [SLOW TEST:24.888 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:56:23.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 16 14:56:24.424: INFO: Waiting up to 5m0s for pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479" in namespace "emptydir-5499" to be "success or failure"
Feb 16 14:56:24.436: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 12.340446ms
Feb 16 14:56:26.453: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029340879s
Feb 16 14:56:28.598: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174144771s
Feb 16 14:56:30.624: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199954527s
Feb 16 14:56:32.673: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249620022s
Feb 16 14:56:34.680: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25574156s
Feb 16 14:56:36.799: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 12.375592977s
Feb 16 14:56:38.870: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 14.446512597s
Feb 16 14:56:41.256: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 16.832115597s
Feb 16 14:56:43.460: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Pending", Reason="", readiness=false. Elapsed: 19.0360281s
Feb 16 14:56:45.468: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.044279522s
STEP: Saw pod success
Feb 16 14:56:45.468: INFO: Pod "pod-7670dd12-c994-4ad3-aa66-da19c4770479" satisfied condition "success or failure"
Feb 16 14:56:45.475: INFO: Trying to get logs from node iruya-node pod pod-7670dd12-c994-4ad3-aa66-da19c4770479 container test-container: 
STEP: delete the pod
Feb 16 14:56:46.096: INFO: Waiting for pod pod-7670dd12-c994-4ad3-aa66-da19c4770479 to disappear
Feb 16 14:56:46.115: INFO: Pod pod-7670dd12-c994-4ad3-aa66-da19c4770479 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:56:46.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5499" for this suite.
Feb 16 14:56:52.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:56:52.269: INFO: namespace emptydir-5499 deletion completed in 6.139952243s

• [SLOW TEST:28.360 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
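The run of `Phase="Pending"` lines above is produced by a poll loop: the framework re-reads the pod's phase roughly every two seconds until it reaches a terminal phase or the 5m0s budget expires. A minimal local sketch of that wait pattern, with the phase source stubbed by a temp file so it runs without a cluster:

```shell
# Stub the "pod phase" with a file; a background job flips it to Succeeded
# after one second, standing in for the pod finishing.
phase_file=$(mktemp)
echo "Pending" > "$phase_file"
( sleep 1; echo "Succeeded" > "$phase_file" ) &

deadline=$(( $(date +%s) + 300 ))       # 5m0s budget, matching the log
while [ "$(cat "$phase_file")" != "Succeeded" ]; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for pod" >&2
    break
  fi
  sleep 1                               # the framework polls on a ~2s interval
done
echo "final phase: $(cat "$phase_file")"
```

The elapsed times printed in the log (12.3ms, 2.03s, 4.17s, ...) are simply the poll interval accumulating until the terminal phase is observed.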
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:56:52.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 14:56:54.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 16 14:56:54.982: INFO: stderr: ""
Feb 16 14:56:54.982: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:56:54.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8605" for this suite.
Feb 16 14:57:01.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:57:01.131: INFO: namespace kubectl-8605 deletion completed in 6.135573785s

• [SLOW TEST:8.862 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
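The version test only asserts that both the Client and Server stanzas appear in the `kubectl version` stdout captured above. The two GitVersion fields can be pulled out of that output with a one-line sed capture (the stanzas are abbreviated here to the fields that matter):

```shell
# Abbreviated copy of the stdout from the run above, one stanza per line.
out='Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", Platform:"linux/amd64"}'

# Each line carries exactly one GitVersion; sed captures the quoted value.
echo "$out" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p'
# prints:
#   v1.15.7
#   v1.15.1
```

The v1.15.7 client against a v1.15.1 server matches the skew reported at suite startup (`e2e test version: v1.15.7`, `kube-apiserver version: v1.15.1`).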
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:57:01.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 14:57:01.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8190'
Feb 16 14:57:01.331: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 14:57:01.332: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 16 14:57:01.525: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-5t87x]
Feb 16 14:57:01.525: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-5t87x" in namespace "kubectl-8190" to be "running and ready"
Feb 16 14:57:01.533: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 7.890147ms
Feb 16 14:57:03.553: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028053213s
Feb 16 14:57:05.566: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040335325s
Feb 16 14:57:07.584: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058765456s
Feb 16 14:57:11.600: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074899926s
Feb 16 14:57:13.624: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.098384508s
Feb 16 14:57:15.647: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.121308344s
Feb 16 14:57:17.726: INFO: Pod "e2e-test-nginx-rc-5t87x": Phase="Running", Reason="", readiness=true. Elapsed: 16.200541459s
Feb 16 14:57:17.726: INFO: Pod "e2e-test-nginx-rc-5t87x" satisfied condition "running and ready"
Feb 16 14:57:17.726: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-5t87x]
Feb 16 14:57:17.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8190'
Feb 16 14:57:17.934: INFO: stderr: ""
Feb 16 14:57:17.934: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 16 14:57:17.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8190'
Feb 16 14:57:18.079: INFO: stderr: ""
Feb 16 14:57:18.079: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:57:18.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8190" for this suite.
Feb 16 14:57:42.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:57:42.236: INFO: namespace kubectl-8190 deletion completed in 24.152266053s

• [SLOW TEST:41.105 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
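The stderr line above warns that `kubectl run --generator=run/v1` (which creates a ReplicationController) is deprecated, and names its replacements. A sketch of the old form next to the alternatives the warning points at, with `kubectl` stubbed so the snippet runs outside a cluster:

```shell
kubectl() { echo "kubectl $*"; }   # stub for illustration only; remove to use the real CLI

# deprecated form used by this v1.15 test (creates a ReplicationController):
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1

# replacements suggested by the deprecation warning itself:
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
```

The empty stdout from `kubectl logs rc/e2e-test-nginx-rc` later in the block is expected here: nginx had accepted no requests, and the test only checks that the logs call succeeds.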
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:57:42.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-xnxp
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 14:57:42.595: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xnxp" in namespace "subpath-4198" to be "success or failure"
Feb 16 14:57:42.601: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 5.759273ms
Feb 16 14:57:44.639: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044095815s
Feb 16 14:57:46.650: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055638037s
Feb 16 14:57:48.666: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071443942s
Feb 16 14:57:50.673: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078314791s
Feb 16 14:57:52.737: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14170917s
Feb 16 14:57:55.010: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.415617728s
Feb 16 14:57:57.018: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.423185167s
Feb 16 14:57:59.024: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 16.42893405s
Feb 16 14:58:01.031: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 18.43582578s
Feb 16 14:58:03.038: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 20.442673566s
Feb 16 14:58:05.053: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 22.457789303s
Feb 16 14:58:07.072: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 24.477068763s
Feb 16 14:58:09.087: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 26.491665811s
Feb 16 14:58:11.102: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 28.507542663s
Feb 16 14:58:13.112: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 30.517466486s
Feb 16 14:58:15.119: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 32.523960782s
Feb 16 14:58:17.127: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 34.532153049s
Feb 16 14:58:19.181: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 36.585920434s
Feb 16 14:58:21.190: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Running", Reason="", readiness=true. Elapsed: 38.594759179s
Feb 16 14:58:23.201: INFO: Pod "pod-subpath-test-projected-xnxp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.606231979s
STEP: Saw pod success
Feb 16 14:58:23.201: INFO: Pod "pod-subpath-test-projected-xnxp" satisfied condition "success or failure"
Feb 16 14:58:23.206: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-xnxp container test-container-subpath-projected-xnxp: 
STEP: delete the pod
Feb 16 14:58:23.790: INFO: Waiting for pod pod-subpath-test-projected-xnxp to disappear
Feb 16 14:58:23.894: INFO: Pod pod-subpath-test-projected-xnxp no longer exists
STEP: Deleting pod pod-subpath-test-projected-xnxp
Feb 16 14:58:23.894: INFO: Deleting pod "pod-subpath-test-projected-xnxp" in namespace "subpath-4198"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:58:23.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4198" for this suite.
Feb 16 14:58:29.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:58:30.067: INFO: namespace subpath-4198 deletion completed in 6.159815284s

• [SLOW TEST:47.828 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:58:30.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 14:58:30.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed" in namespace "projected-9417" to be "success or failure"
Feb 16 14:58:30.219: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 31.352626ms
Feb 16 14:58:32.227: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039304635s
Feb 16 14:58:34.236: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048438658s
Feb 16 14:58:36.252: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06453157s
Feb 16 14:58:38.263: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075481504s
Feb 16 14:58:40.289: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.101791633s
Feb 16 14:58:42.734: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 12.546267524s
Feb 16 14:58:44.741: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 14.553955178s
Feb 16 14:58:46.756: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Pending", Reason="", readiness=false. Elapsed: 16.568240078s
Feb 16 14:58:48.770: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.583049472s
STEP: Saw pod success
Feb 16 14:58:48.770: INFO: Pod "downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed" satisfied condition "success or failure"
Feb 16 14:58:48.776: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed container client-container: 
STEP: delete the pod
Feb 16 14:58:48.984: INFO: Waiting for pod downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed to disappear
Feb 16 14:58:49.123: INFO: Pod downwardapi-volume-c7ef56ad-a345-4f1e-bba7-dcf30f218eed no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:58:49.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9417" for this suite.
Feb 16 14:58:55.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 14:58:55.272: INFO: namespace projected-9417 deletion completed in 6.13620011s

• [SLOW TEST:25.205 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
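The downward API test above mounts the container's memory request into the pod through a projected volume and then reads it back from the mounted file. A sketch of the relevant volume stanza, assuming the standard Kubernetes API fields (the container name `client-container` comes from this run; the `path` value is a hypothetical illustration, not the test's actual spec):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: "memory_request"          # file the container reads back
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
```

Because the value is resolved by the kubelet at mount time, the pod can simply `cat` the file and exit, which is why the pod runs to `Succeeded` rather than staying up.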
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 14:58:55.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 14:59:31.474: INFO: Container started at 2020-02-16 14:59:10 +0000 UTC, pod became ready at 2020-02-16 14:59:31 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 14:59:31.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8261" for this suite.
Feb 16 15:00:11.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:00:11.756: INFO: namespace container-probe-8261 deletion completed in 40.274076207s

• [SLOW TEST:76.483 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
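The behaviour checked above (container started at 14:59:10, pod ready only at 14:59:31) is driven by the probe's `initialDelaySeconds`. A minimal pod that reproduces it might look like this (illustrative values, not the framework's exact spec):

```yaml
# Illustrative sketch: readiness gated behind an initial delay.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-example
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed test image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # pod must not report Ready before this elapses
      periodSeconds: 5
```

With a healthy endpoint, the pod never restarts; it simply stays NotReady until the initial delay passes and the first probe succeeds.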
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:00:11.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 15:00:11.906: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 16 15:00:11.996: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 16 15:00:17.021: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 16 15:00:27.033: INFO: Creating deployment "test-rolling-update-deployment"
Feb 16 15:00:27.088: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 16 15:00:27.097: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 16 15:00:29.110: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 16 15:00:29.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:32.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:33.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:35.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:37.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:39.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:41.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:43.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:45.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462027, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:00:47.120: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 16 15:00:47.132: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8038,SelfLink:/apis/apps/v1/namespaces/deployment-8038/deployments/test-rolling-update-deployment,UID:0719bc68-052e-4699-bfa3-d94d6fad9d5a,ResourceVersion:24588425,Generation:1,CreationTimestamp:2020-02-16 15:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-16 15:00:27 +0000 UTC 2020-02-16 15:00:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-16 15:00:46 +0000 UTC 2020-02-16 15:00:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 16 15:00:47.136: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8038,SelfLink:/apis/apps/v1/namespaces/deployment-8038/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:b9a233c4-0f58-4c68-8693-4cf7f9744f8f,ResourceVersion:24588414,Generation:1,CreationTimestamp:2020-02-16 15:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0719bc68-052e-4699-bfa3-d94d6fad9d5a 0xc0003270b7 0xc0003270b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 16 15:00:47.136: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 16 15:00:47.136: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8038,SelfLink:/apis/apps/v1/namespaces/deployment-8038/replicasets/test-rolling-update-controller,UID:374cdabc-f8af-4f56-ae36-246c324b068f,ResourceVersion:24588424,Generation:2,CreationTimestamp:2020-02-16 15:00:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0719bc68-052e-4699-bfa3-d94d6fad9d5a 0xc000326987 0xc000326988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 15:00:47.140: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-kfh4q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-kfh4q,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8038,SelfLink:/api/v1/namespaces/deployment-8038/pods/test-rolling-update-deployment-79f6b9d75c-kfh4q,UID:f79a6a55-aac0-43ec-9e4f-97471aadaec0,ResourceVersion:24588413,Generation:0,CreationTimestamp:2020-02-16 15:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c b9a233c4-0f58-4c68-8693-4cf7f9744f8f 0xc0027866d7 0xc0027866d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xps7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xps7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xps7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002786750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002786770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:00:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:00:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:00:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:00:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-16 15:00:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-16 15:00:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a821a1af783d0dc701c9565d7b2bdd3fc03f036d6ff282666e10561b8f6a9624}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:00:47.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8038" for this suite.
Feb 16 15:00:55.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:00:55.293: INFO: namespace deployment-8038 deletion completed in 8.14867211s

• [SLOW TEST:43.537 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
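The deployment dumped above decodes to roughly the following manifest (reconstructed from the logged spec; only the fields relevant to the rolling update are shown, so treat it as a sketch rather than the framework's exact object):

```yaml
# Reconstructed from the logged DeploymentSpec; abbreviated and illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod        # matches the adopted "test-rolling-update-controller" pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Because the selector matches the pre-existing replica set's pods, the deployment adopts that replica set, creates a new one for the redis template, and scales the old one down to zero — exactly the transition visible in the polled status lines.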
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:00:55.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 in namespace container-probe-3940
Feb 16 15:01:11.665: INFO: Started pod liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 in namespace container-probe-3940
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 15:01:11.676: INFO: Initial restart count of pod liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 is 0
Feb 16 15:01:31.067: INFO: Restart count of pod container-probe-3940/liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 is now 1 (19.391098181s elapsed)
Feb 16 15:01:45.120: INFO: Restart count of pod container-probe-3940/liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 is now 2 (33.444353751s elapsed)
Feb 16 15:02:05.205: INFO: Restart count of pod container-probe-3940/liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 is now 3 (53.529354651s elapsed)
Feb 16 15:02:25.752: INFO: Restart count of pod container-probe-3940/liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 is now 4 (1m14.075770605s elapsed)
Feb 16 15:03:30.440: INFO: Restart count of pod container-probe-3940/liveness-f7872c26-8b2f-4f29-9efd-9551a357bee4 is now 5 (2m18.764415554s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:03:30.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3940" for this suite.
Feb 16 15:03:36.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:03:36.775: INFO: namespace container-probe-3940 deletion completed in 6.273215139s

• [SLOW TEST:161.481 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
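A liveness probe that fails on a fixed cadence, as in the test above, can be reproduced with a pod like the following (a minimal sketch with illustrative timings; the e2e framework's actual pod differs in detail):

```yaml
# Illustrative sketch: a container whose liveness probe starts failing after 10s,
# driving the monotonically increasing restart count.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Each probe failure past the failure threshold kills the container, the kubelet restarts it (backing off exponentially, which explains the widening gaps between restarts 4 and 5 in the log), and `restartCount` only ever increases.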
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:03:36.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 15:03:36.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3181'
Feb 16 15:03:36.999: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 15:03:36.999: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Feb 16 15:03:37.080: INFO: scanned /root for discovery docs: 
Feb 16 15:03:37.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3181'
Feb 16 15:04:00.873: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 16 15:04:00.873: INFO: stdout: "Created e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46\nScaling up e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"

STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 16 15:04:00.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3181'
Feb 16 15:04:00.995: INFO: stderr: ""
Feb 16 15:04:00.995: INFO: stdout: "e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46-jltsn "
Feb 16 15:04:00.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46-jltsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3181'
Feb 16 15:04:01.147: INFO: stderr: ""
Feb 16 15:04:01.147: INFO: stdout: "true"
Feb 16 15:04:01.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46-jltsn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3181'
Feb 16 15:04:01.253: INFO: stderr: ""
Feb 16 15:04:01.253: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 16 15:04:01.253: INFO: e2e-test-nginx-rc-686dd3e4024547d8c8d2e47895770f46-jltsn is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 16 15:04:01.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3181'
Feb 16 15:04:01.385: INFO: stderr: ""
Feb 16 15:04:01.385: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:04:01.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3181" for this suite.
Feb 16 15:04:23.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:04:23.568: INFO: namespace kubectl-3181 deletion completed in 22.168751561s

• [SLOW TEST:46.792 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
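The probes in the rolling-update test above drive `kubectl get pods` with Go templates such as `--template={{range .items}}{{.metadata.name}} {{end}}`. A minimal sketch of how that template behaves, using Go's stock `text/template` package against a hypothetical stand-in for the JSON pod list (the `exists` helper seen in the later probes is a function kubectl registers on top of the stock package, so it is omitted here):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render evaluates the same template string the test passes to kubectl,
// against a hand-built map standing in for a `kubectl get pods -o json` list.
func render() string {
	podList := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"metadata": map[string]interface{}{"name": "e2e-test-nginx-rc-jltsn"},
			},
		},
	}
	tmpl := template.Must(template.New("names").Parse(
		"{{range .items}}{{.metadata.name}} {{end}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, podList); err != nil {
		panic(err)
	}
	return buf.String() // one pod name per item, each followed by a space
}

func main() {
	fmt.Print(render())
}
```

Note the trailing space after each name, which is why the test's stdout above ends in `"…-jltsn "`.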
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:04:23.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 16 15:04:43.828: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:43.828: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:43.958981       8 log.go:172] (0xc0019926e0) (0xc0025aa0a0) Create stream
I0216 15:04:43.959171       8 log.go:172] (0xc0019926e0) (0xc0025aa0a0) Stream added, broadcasting: 1
I0216 15:04:43.971890       8 log.go:172] (0xc0019926e0) Reply frame received for 1
I0216 15:04:43.972028       8 log.go:172] (0xc0019926e0) (0xc0012fce60) Create stream
I0216 15:04:43.972042       8 log.go:172] (0xc0019926e0) (0xc0012fce60) Stream added, broadcasting: 3
I0216 15:04:43.974495       8 log.go:172] (0xc0019926e0) Reply frame received for 3
I0216 15:04:43.974524       8 log.go:172] (0xc0019926e0) (0xc0025aa140) Create stream
I0216 15:04:43.974539       8 log.go:172] (0xc0019926e0) (0xc0025aa140) Stream added, broadcasting: 5
I0216 15:04:43.976192       8 log.go:172] (0xc0019926e0) Reply frame received for 5
I0216 15:04:44.169886       8 log.go:172] (0xc0019926e0) Data frame received for 3
I0216 15:04:44.169934       8 log.go:172] (0xc0012fce60) (3) Data frame handling
I0216 15:04:44.169961       8 log.go:172] (0xc0012fce60) (3) Data frame sent
I0216 15:04:44.358239       8 log.go:172] (0xc0019926e0) (0xc0012fce60) Stream removed, broadcasting: 3
I0216 15:04:44.358622       8 log.go:172] (0xc0019926e0) Data frame received for 1
I0216 15:04:44.358647       8 log.go:172] (0xc0025aa0a0) (1) Data frame handling
I0216 15:04:44.358705       8 log.go:172] (0xc0025aa0a0) (1) Data frame sent
I0216 15:04:44.359189       8 log.go:172] (0xc0019926e0) (0xc0025aa0a0) Stream removed, broadcasting: 1
I0216 15:04:44.359500       8 log.go:172] (0xc0019926e0) (0xc0025aa140) Stream removed, broadcasting: 5
I0216 15:04:44.359577       8 log.go:172] (0xc0019926e0) (0xc0025aa0a0) Stream removed, broadcasting: 1
I0216 15:04:44.359590       8 log.go:172] (0xc0019926e0) (0xc0012fce60) Stream removed, broadcasting: 3
I0216 15:04:44.359602       8 log.go:172] (0xc0019926e0) (0xc0025aa140) Stream removed, broadcasting: 5
I0216 15:04:44.360176       8 log.go:172] (0xc0019926e0) Go away received
Feb 16 15:04:44.360: INFO: Exec stderr: ""
Feb 16 15:04:44.360: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:44.360: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:44.426497       8 log.go:172] (0xc000e79760) (0xc000c472c0) Create stream
I0216 15:04:44.426600       8 log.go:172] (0xc000e79760) (0xc000c472c0) Stream added, broadcasting: 1
I0216 15:04:44.433445       8 log.go:172] (0xc000e79760) Reply frame received for 1
I0216 15:04:44.433495       8 log.go:172] (0xc000e79760) (0xc00278a0a0) Create stream
I0216 15:04:44.433508       8 log.go:172] (0xc000e79760) (0xc00278a0a0) Stream added, broadcasting: 3
I0216 15:04:44.434971       8 log.go:172] (0xc000e79760) Reply frame received for 3
I0216 15:04:44.435009       8 log.go:172] (0xc000e79760) (0xc000c47400) Create stream
I0216 15:04:44.435021       8 log.go:172] (0xc000e79760) (0xc000c47400) Stream added, broadcasting: 5
I0216 15:04:44.436436       8 log.go:172] (0xc000e79760) Reply frame received for 5
I0216 15:04:44.756177       8 log.go:172] (0xc000e79760) Data frame received for 3
I0216 15:04:44.756413       8 log.go:172] (0xc00278a0a0) (3) Data frame handling
I0216 15:04:44.756481       8 log.go:172] (0xc00278a0a0) (3) Data frame sent
I0216 15:04:44.947226       8 log.go:172] (0xc000e79760) (0xc000c47400) Stream removed, broadcasting: 5
I0216 15:04:44.947332       8 log.go:172] (0xc000e79760) Data frame received for 1
I0216 15:04:44.947355       8 log.go:172] (0xc000c472c0) (1) Data frame handling
I0216 15:04:44.947371       8 log.go:172] (0xc000c472c0) (1) Data frame sent
I0216 15:04:44.947380       8 log.go:172] (0xc000e79760) (0xc000c472c0) Stream removed, broadcasting: 1
I0216 15:04:44.947717       8 log.go:172] (0xc000e79760) (0xc00278a0a0) Stream removed, broadcasting: 3
I0216 15:04:44.947796       8 log.go:172] (0xc000e79760) (0xc000c472c0) Stream removed, broadcasting: 1
I0216 15:04:44.947815       8 log.go:172] (0xc000e79760) (0xc00278a0a0) Stream removed, broadcasting: 3
I0216 15:04:44.947821       8 log.go:172] (0xc000e79760) (0xc000c47400) Stream removed, broadcasting: 5
I0216 15:04:44.948143       8 log.go:172] (0xc000e79760) Go away received
Feb 16 15:04:44.948: INFO: Exec stderr: ""
Feb 16 15:04:44.948: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:44.948: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:45.003469       8 log.go:172] (0xc0019931e0) (0xc0025aa320) Create stream
I0216 15:04:45.003578       8 log.go:172] (0xc0019931e0) (0xc0025aa320) Stream added, broadcasting: 1
I0216 15:04:45.008218       8 log.go:172] (0xc0019931e0) Reply frame received for 1
I0216 15:04:45.008246       8 log.go:172] (0xc0019931e0) (0xc0013c55e0) Create stream
I0216 15:04:45.008256       8 log.go:172] (0xc0019931e0) (0xc0013c55e0) Stream added, broadcasting: 3
I0216 15:04:45.009377       8 log.go:172] (0xc0019931e0) Reply frame received for 3
I0216 15:04:45.009394       8 log.go:172] (0xc0019931e0) (0xc000c47cc0) Create stream
I0216 15:04:45.009401       8 log.go:172] (0xc0019931e0) (0xc000c47cc0) Stream added, broadcasting: 5
I0216 15:04:45.010177       8 log.go:172] (0xc0019931e0) Reply frame received for 5
I0216 15:04:45.099590       8 log.go:172] (0xc0019931e0) Data frame received for 3
I0216 15:04:45.099648       8 log.go:172] (0xc0013c55e0) (3) Data frame handling
I0216 15:04:45.099694       8 log.go:172] (0xc0013c55e0) (3) Data frame sent
I0216 15:04:45.237772       8 log.go:172] (0xc0019931e0) Data frame received for 1
I0216 15:04:45.237881       8 log.go:172] (0xc0019931e0) (0xc000c47cc0) Stream removed, broadcasting: 5
I0216 15:04:45.237948       8 log.go:172] (0xc0025aa320) (1) Data frame handling
I0216 15:04:45.237990       8 log.go:172] (0xc0025aa320) (1) Data frame sent
I0216 15:04:45.238030       8 log.go:172] (0xc0019931e0) (0xc0013c55e0) Stream removed, broadcasting: 3
I0216 15:04:45.238074       8 log.go:172] (0xc0019931e0) (0xc0025aa320) Stream removed, broadcasting: 1
I0216 15:04:45.238148       8 log.go:172] (0xc0019931e0) Go away received
I0216 15:04:45.238475       8 log.go:172] (0xc0019931e0) (0xc0025aa320) Stream removed, broadcasting: 1
I0216 15:04:45.238511       8 log.go:172] (0xc0019931e0) (0xc0013c55e0) Stream removed, broadcasting: 3
I0216 15:04:45.238521       8 log.go:172] (0xc0019931e0) (0xc000c47cc0) Stream removed, broadcasting: 5
Feb 16 15:04:45.238: INFO: Exec stderr: ""
Feb 16 15:04:45.238: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:45.238: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:45.303043       8 log.go:172] (0xc0026e80b0) (0xc0013c5b80) Create stream
I0216 15:04:45.303094       8 log.go:172] (0xc0026e80b0) (0xc0013c5b80) Stream added, broadcasting: 1
I0216 15:04:45.322926       8 log.go:172] (0xc0026e80b0) Reply frame received for 1
I0216 15:04:45.322970       8 log.go:172] (0xc0026e80b0) (0xc002020000) Create stream
I0216 15:04:45.322982       8 log.go:172] (0xc0026e80b0) (0xc002020000) Stream added, broadcasting: 3
I0216 15:04:45.324589       8 log.go:172] (0xc0026e80b0) Reply frame received for 3
I0216 15:04:45.324623       8 log.go:172] (0xc0026e80b0) (0xc0020200a0) Create stream
I0216 15:04:45.324636       8 log.go:172] (0xc0026e80b0) (0xc0020200a0) Stream added, broadcasting: 5
I0216 15:04:45.325919       8 log.go:172] (0xc0026e80b0) Reply frame received for 5
I0216 15:04:45.447797       8 log.go:172] (0xc0026e80b0) Data frame received for 3
I0216 15:04:45.447919       8 log.go:172] (0xc002020000) (3) Data frame handling
I0216 15:04:45.447971       8 log.go:172] (0xc002020000) (3) Data frame sent
I0216 15:04:45.619963       8 log.go:172] (0xc0026e80b0) Data frame received for 1
I0216 15:04:45.620084       8 log.go:172] (0xc0026e80b0) (0xc002020000) Stream removed, broadcasting: 3
I0216 15:04:45.620166       8 log.go:172] (0xc0013c5b80) (1) Data frame handling
I0216 15:04:45.620186       8 log.go:172] (0xc0013c5b80) (1) Data frame sent
I0216 15:04:45.620216       8 log.go:172] (0xc0026e80b0) (0xc0020200a0) Stream removed, broadcasting: 5
I0216 15:04:45.620265       8 log.go:172] (0xc0026e80b0) (0xc0013c5b80) Stream removed, broadcasting: 1
I0216 15:04:45.620287       8 log.go:172] (0xc0026e80b0) Go away received
I0216 15:04:45.621024       8 log.go:172] (0xc0026e80b0) (0xc0013c5b80) Stream removed, broadcasting: 1
I0216 15:04:45.621051       8 log.go:172] (0xc0026e80b0) (0xc002020000) Stream removed, broadcasting: 3
I0216 15:04:45.621060       8 log.go:172] (0xc0026e80b0) (0xc0020200a0) Stream removed, broadcasting: 5
Feb 16 15:04:45.621: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 16 15:04:45.621: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:45.621: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:45.683961       8 log.go:172] (0xc0024d76b0) (0xc0020208c0) Create stream
I0216 15:04:45.684123       8 log.go:172] (0xc0024d76b0) (0xc0020208c0) Stream added, broadcasting: 1
I0216 15:04:45.695218       8 log.go:172] (0xc0024d76b0) Reply frame received for 1
I0216 15:04:45.695252       8 log.go:172] (0xc0024d76b0) (0xc0012fcf00) Create stream
I0216 15:04:45.695260       8 log.go:172] (0xc0024d76b0) (0xc0012fcf00) Stream added, broadcasting: 3
I0216 15:04:45.697497       8 log.go:172] (0xc0024d76b0) Reply frame received for 3
I0216 15:04:45.697530       8 log.go:172] (0xc0024d76b0) (0xc0012fcfa0) Create stream
I0216 15:04:45.697543       8 log.go:172] (0xc0024d76b0) (0xc0012fcfa0) Stream added, broadcasting: 5
I0216 15:04:45.698933       8 log.go:172] (0xc0024d76b0) Reply frame received for 5
I0216 15:04:45.793170       8 log.go:172] (0xc0024d76b0) Data frame received for 3
I0216 15:04:45.793250       8 log.go:172] (0xc0012fcf00) (3) Data frame handling
I0216 15:04:45.793267       8 log.go:172] (0xc0012fcf00) (3) Data frame sent
I0216 15:04:46.010996       8 log.go:172] (0xc0024d76b0) Data frame received for 1
I0216 15:04:46.011082       8 log.go:172] (0xc0024d76b0) (0xc0012fcf00) Stream removed, broadcasting: 3
I0216 15:04:46.011154       8 log.go:172] (0xc0020208c0) (1) Data frame handling
I0216 15:04:46.011171       8 log.go:172] (0xc0020208c0) (1) Data frame sent
I0216 15:04:46.016176       8 log.go:172] (0xc0024d76b0) (0xc0020208c0) Stream removed, broadcasting: 1
I0216 15:04:46.016882       8 log.go:172] (0xc0024d76b0) (0xc0012fcfa0) Stream removed, broadcasting: 5
I0216 15:04:46.016953       8 log.go:172] (0xc0024d76b0) Go away received
I0216 15:04:46.018087       8 log.go:172] (0xc0024d76b0) (0xc0020208c0) Stream removed, broadcasting: 1
I0216 15:04:46.018129       8 log.go:172] (0xc0024d76b0) (0xc0012fcf00) Stream removed, broadcasting: 3
I0216 15:04:46.018144       8 log.go:172] (0xc0024d76b0) (0xc0012fcfa0) Stream removed, broadcasting: 5
Feb 16 15:04:46.018: INFO: Exec stderr: ""
Feb 16 15:04:46.018: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:46.018: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:46.103612       8 log.go:172] (0xc000e780b0) (0xc002a5e140) Create stream
I0216 15:04:46.103690       8 log.go:172] (0xc000e780b0) (0xc002a5e140) Stream added, broadcasting: 1
I0216 15:04:46.119340       8 log.go:172] (0xc000e780b0) Reply frame received for 1
I0216 15:04:46.119399       8 log.go:172] (0xc000e780b0) (0xc002a5e3c0) Create stream
I0216 15:04:46.119430       8 log.go:172] (0xc000e780b0) (0xc002a5e3c0) Stream added, broadcasting: 3
I0216 15:04:46.123342       8 log.go:172] (0xc000e780b0) Reply frame received for 3
I0216 15:04:46.123394       8 log.go:172] (0xc000e780b0) (0xc000c46000) Create stream
I0216 15:04:46.123412       8 log.go:172] (0xc000e780b0) (0xc000c46000) Stream added, broadcasting: 5
I0216 15:04:46.124951       8 log.go:172] (0xc000e780b0) Reply frame received for 5
I0216 15:04:46.276041       8 log.go:172] (0xc000e780b0) Data frame received for 3
I0216 15:04:46.276089       8 log.go:172] (0xc002a5e3c0) (3) Data frame handling
I0216 15:04:46.276108       8 log.go:172] (0xc002a5e3c0) (3) Data frame sent
I0216 15:04:46.408888       8 log.go:172] (0xc000e780b0) Data frame received for 1
I0216 15:04:46.408987       8 log.go:172] (0xc000e780b0) (0xc002a5e3c0) Stream removed, broadcasting: 3
I0216 15:04:46.409046       8 log.go:172] (0xc002a5e140) (1) Data frame handling
I0216 15:04:46.409080       8 log.go:172] (0xc002a5e140) (1) Data frame sent
I0216 15:04:46.409093       8 log.go:172] (0xc000e780b0) (0xc000c46000) Stream removed, broadcasting: 5
I0216 15:04:46.409134       8 log.go:172] (0xc000e780b0) (0xc002a5e140) Stream removed, broadcasting: 1
I0216 15:04:46.409164       8 log.go:172] (0xc000e780b0) Go away received
I0216 15:04:46.409454       8 log.go:172] (0xc000e780b0) (0xc002a5e140) Stream removed, broadcasting: 1
I0216 15:04:46.409473       8 log.go:172] (0xc000e780b0) (0xc002a5e3c0) Stream removed, broadcasting: 3
I0216 15:04:46.409484       8 log.go:172] (0xc000e780b0) (0xc000c46000) Stream removed, broadcasting: 5
Feb 16 15:04:46.409: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 16 15:04:46.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:46.409: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:46.526250       8 log.go:172] (0xc001d4a790) (0xc0021d4460) Create stream
I0216 15:04:46.526491       8 log.go:172] (0xc001d4a790) (0xc0021d4460) Stream added, broadcasting: 1
I0216 15:04:46.537211       8 log.go:172] (0xc001d4a790) Reply frame received for 1
I0216 15:04:46.537258       8 log.go:172] (0xc001d4a790) (0xc0021d45a0) Create stream
I0216 15:04:46.537266       8 log.go:172] (0xc001d4a790) (0xc0021d45a0) Stream added, broadcasting: 3
I0216 15:04:46.538897       8 log.go:172] (0xc001d4a790) Reply frame received for 3
I0216 15:04:46.538944       8 log.go:172] (0xc001d4a790) (0xc0013643c0) Create stream
I0216 15:04:46.538967       8 log.go:172] (0xc001d4a790) (0xc0013643c0) Stream added, broadcasting: 5
I0216 15:04:46.542504       8 log.go:172] (0xc001d4a790) Reply frame received for 5
I0216 15:04:46.698036       8 log.go:172] (0xc001d4a790) Data frame received for 3
I0216 15:04:46.698121       8 log.go:172] (0xc0021d45a0) (3) Data frame handling
I0216 15:04:46.698147       8 log.go:172] (0xc0021d45a0) (3) Data frame sent
I0216 15:04:46.817371       8 log.go:172] (0xc001d4a790) (0xc0021d45a0) Stream removed, broadcasting: 3
I0216 15:04:46.817667       8 log.go:172] (0xc001d4a790) Data frame received for 1
I0216 15:04:46.817943       8 log.go:172] (0xc001d4a790) (0xc0013643c0) Stream removed, broadcasting: 5
I0216 15:04:46.818010       8 log.go:172] (0xc0021d4460) (1) Data frame handling
I0216 15:04:46.818069       8 log.go:172] (0xc0021d4460) (1) Data frame sent
I0216 15:04:46.818103       8 log.go:172] (0xc001d4a790) (0xc0021d4460) Stream removed, broadcasting: 1
I0216 15:04:46.818138       8 log.go:172] (0xc001d4a790) Go away received
I0216 15:04:46.818592       8 log.go:172] (0xc001d4a790) (0xc0021d4460) Stream removed, broadcasting: 1
I0216 15:04:46.818653       8 log.go:172] (0xc001d4a790) (0xc0021d45a0) Stream removed, broadcasting: 3
I0216 15:04:46.818682       8 log.go:172] (0xc001d4a790) (0xc0013643c0) Stream removed, broadcasting: 5
Feb 16 15:04:46.818: INFO: Exec stderr: ""
Feb 16 15:04:46.819: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:46.819: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:46.947635       8 log.go:172] (0xc0007d04d0) (0xc001998820) Create stream
I0216 15:04:46.947737       8 log.go:172] (0xc0007d04d0) (0xc001998820) Stream added, broadcasting: 1
I0216 15:04:46.961497       8 log.go:172] (0xc0007d04d0) Reply frame received for 1
I0216 15:04:46.961602       8 log.go:172] (0xc0007d04d0) (0xc0021d46e0) Create stream
I0216 15:04:46.961616       8 log.go:172] (0xc0007d04d0) (0xc0021d46e0) Stream added, broadcasting: 3
I0216 15:04:46.962896       8 log.go:172] (0xc0007d04d0) Reply frame received for 3
I0216 15:04:46.962935       8 log.go:172] (0xc0007d04d0) (0xc001998960) Create stream
I0216 15:04:46.962946       8 log.go:172] (0xc0007d04d0) (0xc001998960) Stream added, broadcasting: 5
I0216 15:04:46.965996       8 log.go:172] (0xc0007d04d0) Reply frame received for 5
I0216 15:04:47.083986       8 log.go:172] (0xc0007d04d0) Data frame received for 3
I0216 15:04:47.084051       8 log.go:172] (0xc0021d46e0) (3) Data frame handling
I0216 15:04:47.084136       8 log.go:172] (0xc0021d46e0) (3) Data frame sent
I0216 15:04:47.217104       8 log.go:172] (0xc0007d04d0) Data frame received for 1
I0216 15:04:47.217187       8 log.go:172] (0xc0007d04d0) (0xc0021d46e0) Stream removed, broadcasting: 3
I0216 15:04:47.217229       8 log.go:172] (0xc001998820) (1) Data frame handling
I0216 15:04:47.217257       8 log.go:172] (0xc0007d04d0) (0xc001998960) Stream removed, broadcasting: 5
I0216 15:04:47.217290       8 log.go:172] (0xc001998820) (1) Data frame sent
I0216 15:04:47.217309       8 log.go:172] (0xc0007d04d0) (0xc001998820) Stream removed, broadcasting: 1
I0216 15:04:47.217328       8 log.go:172] (0xc0007d04d0) Go away received
I0216 15:04:47.217729       8 log.go:172] (0xc0007d04d0) (0xc001998820) Stream removed, broadcasting: 1
I0216 15:04:47.217757       8 log.go:172] (0xc0007d04d0) (0xc0021d46e0) Stream removed, broadcasting: 3
I0216 15:04:47.217773       8 log.go:172] (0xc0007d04d0) (0xc001998960) Stream removed, broadcasting: 5
Feb 16 15:04:47.217: INFO: Exec stderr: ""
Feb 16 15:04:47.217: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:47.218: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:47.297035       8 log.go:172] (0xc000461550) (0xc000c468c0) Create stream
I0216 15:04:47.297169       8 log.go:172] (0xc000461550) (0xc000c468c0) Stream added, broadcasting: 1
I0216 15:04:47.306023       8 log.go:172] (0xc000461550) Reply frame received for 1
I0216 15:04:47.306096       8 log.go:172] (0xc000461550) (0xc002a5e500) Create stream
I0216 15:04:47.306116       8 log.go:172] (0xc000461550) (0xc002a5e500) Stream added, broadcasting: 3
I0216 15:04:47.307805       8 log.go:172] (0xc000461550) Reply frame received for 3
I0216 15:04:47.307836       8 log.go:172] (0xc000461550) (0xc0021d4780) Create stream
I0216 15:04:47.307855       8 log.go:172] (0xc000461550) (0xc0021d4780) Stream added, broadcasting: 5
I0216 15:04:47.309534       8 log.go:172] (0xc000461550) Reply frame received for 5
I0216 15:04:47.404193       8 log.go:172] (0xc000461550) Data frame received for 3
I0216 15:04:47.404299       8 log.go:172] (0xc002a5e500) (3) Data frame handling
I0216 15:04:47.404336       8 log.go:172] (0xc002a5e500) (3) Data frame sent
I0216 15:04:47.543635       8 log.go:172] (0xc000461550) Data frame received for 1
I0216 15:04:47.543761       8 log.go:172] (0xc000461550) (0xc002a5e500) Stream removed, broadcasting: 3
I0216 15:04:47.543812       8 log.go:172] (0xc000c468c0) (1) Data frame handling
I0216 15:04:47.543840       8 log.go:172] (0xc000c468c0) (1) Data frame sent
I0216 15:04:47.543923       8 log.go:172] (0xc000461550) (0xc0021d4780) Stream removed, broadcasting: 5
I0216 15:04:47.543947       8 log.go:172] (0xc000461550) (0xc000c468c0) Stream removed, broadcasting: 1
I0216 15:04:47.543962       8 log.go:172] (0xc000461550) Go away received
I0216 15:04:47.544202       8 log.go:172] (0xc000461550) (0xc000c468c0) Stream removed, broadcasting: 1
I0216 15:04:47.544214       8 log.go:172] (0xc000461550) (0xc002a5e500) Stream removed, broadcasting: 3
I0216 15:04:47.544220       8 log.go:172] (0xc000461550) (0xc0021d4780) Stream removed, broadcasting: 5
Feb 16 15:04:47.544: INFO: Exec stderr: ""
Feb 16 15:04:47.544: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 15:04:47.544: INFO: >>> kubeConfig: /root/.kube/config
I0216 15:04:47.616744       8 log.go:172] (0xc001992000) (0xc000c47400) Create stream
I0216 15:04:47.616812       8 log.go:172] (0xc001992000) (0xc000c47400) Stream added, broadcasting: 1
I0216 15:04:47.628286       8 log.go:172] (0xc001992000) Reply frame received for 1
I0216 15:04:47.628334       8 log.go:172] (0xc001992000) (0xc0003a41e0) Create stream
I0216 15:04:47.628344       8 log.go:172] (0xc001992000) (0xc0003a41e0) Stream added, broadcasting: 3
I0216 15:04:47.629710       8 log.go:172] (0xc001992000) Reply frame received for 3
I0216 15:04:47.629734       8 log.go:172] (0xc001992000) (0xc0013645a0) Create stream
I0216 15:04:47.629741       8 log.go:172] (0xc001992000) (0xc0013645a0) Stream added, broadcasting: 5
I0216 15:04:47.630937       8 log.go:172] (0xc001992000) Reply frame received for 5
I0216 15:04:47.752063       8 log.go:172] (0xc001992000) Data frame received for 3
I0216 15:04:47.752124       8 log.go:172] (0xc0003a41e0) (3) Data frame handling
I0216 15:04:47.752150       8 log.go:172] (0xc0003a41e0) (3) Data frame sent
I0216 15:04:47.874255       8 log.go:172] (0xc001992000) Data frame received for 1
I0216 15:04:47.874595       8 log.go:172] (0xc001992000) (0xc0013645a0) Stream removed, broadcasting: 5
I0216 15:04:47.874706       8 log.go:172] (0xc000c47400) (1) Data frame handling
I0216 15:04:47.874802       8 log.go:172] (0xc001992000) (0xc0003a41e0) Stream removed, broadcasting: 3
I0216 15:04:47.874885       8 log.go:172] (0xc000c47400) (1) Data frame sent
I0216 15:04:47.874905       8 log.go:172] (0xc001992000) (0xc000c47400) Stream removed, broadcasting: 1
I0216 15:04:47.874926       8 log.go:172] (0xc001992000) Go away received
I0216 15:04:47.875416       8 log.go:172] (0xc001992000) (0xc000c47400) Stream removed, broadcasting: 1
I0216 15:04:47.875432       8 log.go:172] (0xc001992000) (0xc0003a41e0) Stream removed, broadcasting: 3
I0216 15:04:47.875447       8 log.go:172] (0xc001992000) (0xc0013645a0) Stream removed, broadcasting: 5
Feb 16 15:04:47.875: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:04:47.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6438" for this suite.
Feb 16 15:05:39.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:05:40.065: INFO: namespace e2e-kubelet-etc-hosts-6438 deletion completed in 52.176924686s

• [SLOW TEST:76.497 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
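The KubeletManagedEtcHosts test above exercises three cases: containers in an ordinary pod (managed `/etc/hosts`), a container that mounts its own file at `/etc/hosts` (not managed), and a `hostNetwork=true` pod (not managed). A sketch of that decision rule as the test observes it; this is an illustrative restatement of the verified behavior, not kubelet source:

```go
package main

import "fmt"

// kubeletManagesEtcHosts reports whether the kubelet injects its managed
// /etc/hosts into a container: it does, unless the pod runs with
// hostNetwork=true or the container mounts its own file at /etc/hosts.
func kubeletManagesEtcHosts(hostNetwork bool, mountPaths []string) bool {
	if hostNetwork {
		return false
	}
	for _, p := range mountPaths {
		if p == "/etc/hosts" {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(kubeletManagesEtcHosts(false, nil))                    // busybox-1/2 in test-pod: managed
	fmt.Println(kubeletManagesEtcHosts(false, []string{"/etc/hosts"})) // busybox-3 with its own mount: not managed
	fmt.Println(kubeletManagesEtcHosts(true, nil))                     // test-host-network-pod: not managed
}
```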
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:05:40.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 15:05:40.240: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 16 15:05:40.305: INFO: Number of nodes with available pods: 0
Feb 16 15:05:40.305: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:42.413: INFO: Number of nodes with available pods: 0
Feb 16 15:05:42.413: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:43.412: INFO: Number of nodes with available pods: 0
Feb 16 15:05:43.412: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:44.325: INFO: Number of nodes with available pods: 0
Feb 16 15:05:44.325: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:45.319: INFO: Number of nodes with available pods: 0
Feb 16 15:05:45.319: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:47.272: INFO: Number of nodes with available pods: 0
Feb 16 15:05:47.273: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:47.907: INFO: Number of nodes with available pods: 0
Feb 16 15:05:47.907: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:49.886: INFO: Number of nodes with available pods: 0
Feb 16 15:05:49.886: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:50.320: INFO: Number of nodes with available pods: 0
Feb 16 15:05:50.320: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:05:51.321: INFO: Number of nodes with available pods: 1
Feb 16 15:05:51.321: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 15:05:52.323: INFO: Number of nodes with available pods: 2
Feb 16 15:05:52.323: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 16 15:05:52.388: INFO: Wrong image for pod: daemon-set-59ssq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:52.389: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:53.429: INFO: Wrong image for pod: daemon-set-59ssq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:53.429: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:54.480: INFO: Wrong image for pod: daemon-set-59ssq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:54.480: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:55.429: INFO: Wrong image for pod: daemon-set-59ssq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:55.429: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:56.430: INFO: Wrong image for pod: daemon-set-59ssq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:56.430: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:57.441: INFO: Wrong image for pod: daemon-set-59ssq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:57.442: INFO: Pod daemon-set-59ssq is not available
Feb 16 15:05:57.442: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:58.434: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:58.434: INFO: Pod daemon-set-pddts is not available
Feb 16 15:05:59.426: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:05:59.426: INFO: Pod daemon-set-pddts is not available
Feb 16 15:06:00.431: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:00.431: INFO: Pod daemon-set-pddts is not available
Feb 16 15:06:01.433: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:01.433: INFO: Pod daemon-set-pddts is not available
Feb 16 15:06:03.292: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:03.292: INFO: Pod daemon-set-pddts is not available
Feb 16 15:06:04.501: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:04.501: INFO: Pod daemon-set-pddts is not available
Feb 16 15:06:05.426: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:05.426: INFO: Pod daemon-set-pddts is not available
Feb 16 15:06:06.481: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:07.468: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:08.430: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:09.430: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:10.433: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:11.425: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:11.425: INFO: Pod daemon-set-jqm55 is not available
Feb 16 15:06:12.477: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:12.477: INFO: Pod daemon-set-jqm55 is not available
Feb 16 15:06:13.440: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:13.440: INFO: Pod daemon-set-jqm55 is not available
Feb 16 15:06:14.430: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:14.430: INFO: Pod daemon-set-jqm55 is not available
Feb 16 15:06:15.427: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:15.427: INFO: Pod daemon-set-jqm55 is not available
Feb 16 15:06:16.431: INFO: Wrong image for pod: daemon-set-jqm55. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 15:06:16.431: INFO: Pod daemon-set-jqm55 is not available
Feb 16 15:06:17.424: INFO: Pod daemon-set-bmxvr is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 16 15:06:17.436: INFO: Number of nodes with available pods: 1
Feb 16 15:06:17.436: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:18.456: INFO: Number of nodes with available pods: 1
Feb 16 15:06:18.456: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:19.518: INFO: Number of nodes with available pods: 1
Feb 16 15:06:19.518: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:20.471: INFO: Number of nodes with available pods: 1
Feb 16 15:06:20.471: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:21.457: INFO: Number of nodes with available pods: 1
Feb 16 15:06:21.457: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:22.449: INFO: Number of nodes with available pods: 1
Feb 16 15:06:22.449: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:23.482: INFO: Number of nodes with available pods: 1
Feb 16 15:06:23.482: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:06:24.459: INFO: Number of nodes with available pods: 2
Feb 16 15:06:24.460: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3508, will wait for the garbage collector to delete the pods
Feb 16 15:06:24.581: INFO: Deleting DaemonSet.extensions daemon-set took: 39.282907ms
Feb 16 15:06:24.882: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.147547ms
Feb 16 15:06:36.598: INFO: Number of nodes with available pods: 0
Feb 16 15:06:36.598: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 15:06:36.602: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3508/daemonsets","resourceVersion":"24589180"},"items":null}

Feb 16 15:06:36.605: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3508/pods","resourceVersion":"24589180"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:06:36.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3508" for this suite.
Feb 16 15:06:42.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:06:42.729: INFO: namespace daemonsets-3508 deletion completed in 6.107304795s

• [SLOW TEST:62.664 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
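Editor's note: the DaemonSet rolling update traced above can be reproduced with an ordinary manifest. The sketch below is hypothetical (the e2e suite builds its spec in Go; the name, labels, and container name here are illustrative), but it uses the strategy and images the log reports:

```yaml
# Hypothetical minimal DaemonSet mirroring the test's rolling update.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # pods are replaced one at a time per node
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        # later updated to gcr.io/kubernetes-e2e-test-images/redis:1.0
        image: docker.io/library/nginx:1.14-alpine
```

Changing `.spec.template.spec.containers[0].image` is what triggers the sequence seen in the log: each old pod is deleted, briefly reported "not available", and recreated with the new image until every node runs an available pod again.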
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:06:42.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:06:51.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4031" for this suite.
Feb 16 15:06:57.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:06:57.241: INFO: namespace emptydir-wrapper-4031 deletion completed in 6.181722456s

• [SLOW TEST:14.512 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:06:57.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5297
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 16 15:06:57.453: INFO: Found 0 stateful pods, waiting for 3
Feb 16 15:07:07.464: INFO: Found 2 stateful pods, waiting for 3
Feb 16 15:07:17.461: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 15:07:17.461: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 15:07:17.461: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 15:07:27.466: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 15:07:27.466: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 15:07:27.466: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 15:07:27.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5297 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 15:07:29.906: INFO: stderr: "I0216 15:07:29.535410    3524 log.go:172] (0xc000ba0420) (0xc0007ac960) Create stream\nI0216 15:07:29.535570    3524 log.go:172] (0xc000ba0420) (0xc0007ac960) Stream added, broadcasting: 1\nI0216 15:07:29.541988    3524 log.go:172] (0xc000ba0420) Reply frame received for 1\nI0216 15:07:29.542091    3524 log.go:172] (0xc000ba0420) (0xc00074a0a0) Create stream\nI0216 15:07:29.542120    3524 log.go:172] (0xc000ba0420) (0xc00074a0a0) Stream added, broadcasting: 3\nI0216 15:07:29.543895    3524 log.go:172] (0xc000ba0420) Reply frame received for 3\nI0216 15:07:29.544016    3524 log.go:172] (0xc000ba0420) (0xc0007ca000) Create stream\nI0216 15:07:29.544051    3524 log.go:172] (0xc000ba0420) (0xc0007ca000) Stream added, broadcasting: 5\nI0216 15:07:29.546740    3524 log.go:172] (0xc000ba0420) Reply frame received for 5\nI0216 15:07:29.664287    3524 log.go:172] (0xc000ba0420) Data frame received for 5\nI0216 15:07:29.664379    3524 log.go:172] (0xc0007ca000) (5) Data frame handling\nI0216 15:07:29.664418    3524 log.go:172] (0xc0007ca000) (5) Data frame sent\nI0216 15:07:29.664435    3524 log.go:172] (0xc000ba0420) Data frame received for 5\nI0216 15:07:29.664445    3524 log.go:172] (0xc0007ca000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 15:07:29.664508    3524 log.go:172] (0xc0007ca000) (5) Data frame sent\nI0216 15:07:29.736736    3524 log.go:172] (0xc000ba0420) Data frame received for 3\nI0216 15:07:29.736772    3524 log.go:172] (0xc00074a0a0) (3) Data frame handling\nI0216 15:07:29.736798    3524 log.go:172] (0xc00074a0a0) (3) Data frame sent\nI0216 15:07:29.888901    3524 log.go:172] (0xc000ba0420) (0xc00074a0a0) Stream removed, broadcasting: 3\nI0216 15:07:29.889233    3524 log.go:172] (0xc000ba0420) Data frame received for 1\nI0216 15:07:29.889258    3524 log.go:172] (0xc0007ac960) (1) Data frame handling\nI0216 15:07:29.889276    3524 log.go:172] (0xc0007ac960) (1) Data frame sent\nI0216 15:07:29.889292    3524 log.go:172] (0xc000ba0420) (0xc0007ac960) Stream removed, broadcasting: 1\nI0216 15:07:29.890143    3524 log.go:172] (0xc000ba0420) (0xc0007ca000) Stream removed, broadcasting: 5\nI0216 15:07:29.890221    3524 log.go:172] (0xc000ba0420) (0xc0007ac960) Stream removed, broadcasting: 1\nI0216 15:07:29.890815    3524 log.go:172] (0xc000ba0420) (0xc00074a0a0) Stream removed, broadcasting: 3\nI0216 15:07:29.890894    3524 log.go:172] (0xc000ba0420) (0xc0007ca000) Stream removed, broadcasting: 5\n"
Feb 16 15:07:29.906: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 15:07:29.906: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 16 15:07:29.969: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 16 15:07:40.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5297 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 15:07:40.408: INFO: stderr: "I0216 15:07:40.199162    3558 log.go:172] (0xc0009ec580) (0xc00067eb40) Create stream\nI0216 15:07:40.200097    3558 log.go:172] (0xc0009ec580) (0xc00067eb40) Stream added, broadcasting: 1\nI0216 15:07:40.205941    3558 log.go:172] (0xc0009ec580) Reply frame received for 1\nI0216 15:07:40.206819    3558 log.go:172] (0xc0009ec580) (0xc00081e000) Create stream\nI0216 15:07:40.207012    3558 log.go:172] (0xc0009ec580) (0xc00081e000) Stream added, broadcasting: 3\nI0216 15:07:40.216912    3558 log.go:172] (0xc0009ec580) Reply frame received for 3\nI0216 15:07:40.216971    3558 log.go:172] (0xc0009ec580) (0xc00067e280) Create stream\nI0216 15:07:40.216982    3558 log.go:172] (0xc0009ec580) (0xc00067e280) Stream added, broadcasting: 5\nI0216 15:07:40.218518    3558 log.go:172] (0xc0009ec580) Reply frame received for 5\nI0216 15:07:40.297265    3558 log.go:172] (0xc0009ec580) Data frame received for 5\nI0216 15:07:40.297408    3558 log.go:172] (0xc00067e280) (5) Data frame handling\nI0216 15:07:40.297446    3558 log.go:172] (0xc00067e280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 15:07:40.297684    3558 log.go:172] (0xc0009ec580) Data frame received for 3\nI0216 15:07:40.297715    3558 log.go:172] (0xc00081e000) (3) Data frame handling\nI0216 15:07:40.297787    3558 log.go:172] (0xc00081e000) (3) Data frame sent\nI0216 15:07:40.397860    3558 log.go:172] (0xc0009ec580) Data frame received for 1\nI0216 15:07:40.397980    3558 log.go:172] (0xc0009ec580) (0xc00081e000) Stream removed, broadcasting: 3\nI0216 15:07:40.398124    3558 log.go:172] (0xc00067eb40) (1) Data frame handling\nI0216 15:07:40.398188    3558 log.go:172] (0xc00067eb40) (1) Data frame sent\nI0216 15:07:40.398217    3558 log.go:172] (0xc0009ec580) (0xc00067e280) Stream removed, broadcasting: 5\nI0216 15:07:40.398353    3558 log.go:172] (0xc0009ec580) (0xc00067eb40) Stream removed, broadcasting: 1\nI0216 15:07:40.398388    3558 log.go:172] (0xc0009ec580) Go away received\nI0216 15:07:40.399262    3558 log.go:172] (0xc0009ec580) (0xc00067eb40) Stream removed, broadcasting: 1\nI0216 15:07:40.399289    3558 log.go:172] (0xc0009ec580) (0xc00081e000) Stream removed, broadcasting: 3\nI0216 15:07:40.399299    3558 log.go:172] (0xc0009ec580) (0xc00067e280) Stream removed, broadcasting: 5\n"
Feb 16 15:07:40.408: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 15:07:40.408: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 15:07:50.436: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:07:50.436: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 15:07:50.436: INFO: Waiting for Pod statefulset-5297/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 15:08:00.452: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:08:00.452: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 15:08:00.452: INFO: Waiting for Pod statefulset-5297/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 15:08:10.460: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:08:10.460: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 15:08:20.466: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:08:20.466: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 15:08:30.453: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 16 15:08:40.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5297 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 15:08:40.945: INFO: stderr: "I0216 15:08:40.660601    3582 log.go:172] (0xc000964370) (0xc0008b2640) Create stream\nI0216 15:08:40.660865    3582 log.go:172] (0xc000964370) (0xc0008b2640) Stream added, broadcasting: 1\nI0216 15:08:40.675880    3582 log.go:172] (0xc000964370) Reply frame received for 1\nI0216 15:08:40.675934    3582 log.go:172] (0xc000964370) (0xc0004c2280) Create stream\nI0216 15:08:40.675943    3582 log.go:172] (0xc000964370) (0xc0004c2280) Stream added, broadcasting: 3\nI0216 15:08:40.677659    3582 log.go:172] (0xc000964370) Reply frame received for 3\nI0216 15:08:40.677701    3582 log.go:172] (0xc000964370) (0xc0009da0a0) Create stream\nI0216 15:08:40.677715    3582 log.go:172] (0xc000964370) (0xc0009da0a0) Stream added, broadcasting: 5\nI0216 15:08:40.679184    3582 log.go:172] (0xc000964370) Reply frame received for 5\nI0216 15:08:40.800027    3582 log.go:172] (0xc000964370) Data frame received for 5\nI0216 15:08:40.800073    3582 log.go:172] (0xc0009da0a0) (5) Data frame handling\nI0216 15:08:40.800096    3582 log.go:172] (0xc0009da0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0216 15:08:40.843768    3582 log.go:172] (0xc000964370) Data frame received for 3\nI0216 15:08:40.843791    3582 log.go:172] (0xc0004c2280) (3) Data frame handling\nI0216 15:08:40.843813    3582 log.go:172] (0xc0004c2280) (3) Data frame sent\nI0216 15:08:40.933196    3582 log.go:172] (0xc000964370) (0xc0004c2280) Stream removed, broadcasting: 3\nI0216 15:08:40.933383    3582 log.go:172] (0xc000964370) (0xc0009da0a0) Stream removed, broadcasting: 5\nI0216 15:08:40.933429    3582 log.go:172] (0xc000964370) Data frame received for 1\nI0216 15:08:40.933445    3582 log.go:172] (0xc0008b2640) (1) Data frame handling\nI0216 15:08:40.933463    3582 log.go:172] (0xc0008b2640) (1) Data frame sent\nI0216 15:08:40.933473    3582 log.go:172] (0xc000964370) (0xc0008b2640) Stream removed, broadcasting: 1\nI0216 15:08:40.933490    3582 log.go:172] (0xc000964370) Go away received\nI0216 15:08:40.934254    3582 log.go:172] (0xc000964370) (0xc0008b2640) Stream removed, broadcasting: 1\nI0216 15:08:40.934412    3582 log.go:172] (0xc000964370) (0xc0004c2280) Stream removed, broadcasting: 3\nI0216 15:08:40.934459    3582 log.go:172] (0xc000964370) (0xc0009da0a0) Stream removed, broadcasting: 5\n"
Feb 16 15:08:40.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 15:08:40.946: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 15:08:51.006: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 16 15:09:01.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5297 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 15:09:01.465: INFO: stderr: "I0216 15:09:01.272403    3601 log.go:172] (0xc000118bb0) (0xc00085e6e0) Create stream\nI0216 15:09:01.272847    3601 log.go:172] (0xc000118bb0) (0xc00085e6e0) Stream added, broadcasting: 1\nI0216 15:09:01.275210    3601 log.go:172] (0xc000118bb0) Reply frame received for 1\nI0216 15:09:01.275241    3601 log.go:172] (0xc000118bb0) (0xc00063a280) Create stream\nI0216 15:09:01.275249    3601 log.go:172] (0xc000118bb0) (0xc00063a280) Stream added, broadcasting: 3\nI0216 15:09:01.276320    3601 log.go:172] (0xc000118bb0) Reply frame received for 3\nI0216 15:09:01.276354    3601 log.go:172] (0xc000118bb0) (0xc00085e780) Create stream\nI0216 15:09:01.276366    3601 log.go:172] (0xc000118bb0) (0xc00085e780) Stream added, broadcasting: 5\nI0216 15:09:01.277413    3601 log.go:172] (0xc000118bb0) Reply frame received for 5\nI0216 15:09:01.365873    3601 log.go:172] (0xc000118bb0) Data frame received for 3\nI0216 15:09:01.365896    3601 log.go:172] (0xc00063a280) (3) Data frame handling\nI0216 15:09:01.365911    3601 log.go:172] (0xc00063a280) (3) Data frame sent\nI0216 15:09:01.365960    3601 log.go:172] (0xc000118bb0) Data frame received for 5\nI0216 15:09:01.365977    3601 log.go:172] (0xc00085e780) (5) Data frame handling\nI0216 15:09:01.365984    3601 log.go:172] (0xc00085e780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0216 15:09:01.457136    3601 log.go:172] (0xc000118bb0) (0xc00063a280) Stream removed, broadcasting: 3\nI0216 15:09:01.457614    3601 log.go:172] (0xc000118bb0) Data frame received for 1\nI0216 15:09:01.457655    3601 log.go:172] (0xc00085e6e0) (1) Data frame handling\nI0216 15:09:01.457675    3601 log.go:172] (0xc00085e6e0) (1) Data frame sent\nI0216 15:09:01.457687    3601 log.go:172] (0xc000118bb0) (0xc00085e6e0) Stream removed, broadcasting: 1\nI0216 15:09:01.457897    3601 log.go:172] (0xc000118bb0) (0xc00085e780) Stream removed, broadcasting: 5\nI0216 15:09:01.457931    3601 log.go:172] (0xc000118bb0) Go away received\nI0216 15:09:01.458224    3601 log.go:172] (0xc000118bb0) (0xc00085e6e0) Stream removed, broadcasting: 1\nI0216 15:09:01.458246    3601 log.go:172] (0xc000118bb0) (0xc00063a280) Stream removed, broadcasting: 3\nI0216 15:09:01.458257    3601 log.go:172] (0xc000118bb0) (0xc00085e780) Stream removed, broadcasting: 5\n"
Feb 16 15:09:01.465: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 15:09:01.465: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 15:09:11.509: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:09:11.509: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:11.509: INFO: Waiting for Pod statefulset-5297/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:11.509: INFO: Waiting for Pod statefulset-5297/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:21.530: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:09:21.530: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:21.530: INFO: Waiting for Pod statefulset-5297/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:31.525: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:09:31.525: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:31.525: INFO: Waiting for Pod statefulset-5297/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:41.535: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:09:41.535: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:09:51.522: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
Feb 16 15:09:51.522: INFO: Waiting for Pod statefulset-5297/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 15:10:01.525: INFO: Waiting for StatefulSet statefulset-5297/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 16 15:10:11.523: INFO: Deleting all statefulset in ns statefulset-5297
Feb 16 15:10:11.528: INFO: Scaling statefulset ss2 to 0
Feb 16 15:10:51.578: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 15:10:51.591: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:10:51.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5297" for this suite.
Feb 16 15:10:59.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:10:59.883: INFO: namespace statefulset-5297 deletion completed in 8.190271603s

• [SLOW TEST:242.641 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
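Editor's note: the rolling update and rollback above rely on the StatefulSet RollingUpdate strategy and its revision history (the `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` hashes the log waits on are ControllerRevision hashes). A hypothetical minimal spec of the kind the test drives programmatically:

```yaml
# Hypothetical sketch; the e2e suite creates ss2 via the Go client, not YAML.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # matches the "service test" created in the log
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate        # pods updated in reverse ordinal order: ss2-2, ss2-1, ss2-0
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        # updated to nginx:1.15-alpine, then rolled back
        image: docker.io/library/nginx:1.14-alpine
```

A rollback is simply reapplying the earlier pod template: the controller records each template as a ControllerRevision, which is why the log tracks progress by comparing each pod's current revision hash to the update revision.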
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:10:59.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 15:11:00.103: INFO: Creating deployment "nginx-deployment"
Feb 16 15:11:00.241: INFO: Waiting for observed generation 1
Feb 16 15:11:04.077: INFO: Waiting for all required pods to come up
Feb 16 15:11:04.795: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 16 15:11:33.006: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 16 15:11:33.013: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 16 15:11:33.027: INFO: Updating deployment nginx-deployment
Feb 16 15:11:33.027: INFO: Waiting for observed generation 2
Feb 16 15:11:36.663: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 16 15:11:36.696: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 16 15:11:37.437: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 16 15:11:38.091: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 16 15:11:38.091: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 16 15:11:38.855: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 16 15:11:39.246: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 16 15:11:39.246: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 16 15:11:39.623: INFO: Updating deployment nginx-deployment
Feb 16 15:11:39.624: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 16 15:11:39.857: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 16 15:11:40.070: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
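Editor's note: the replica counts verified here (20 and 13) follow from proportional scaling. Scaling the deployment from 10 to 30 with maxSurge=3 grows the replica budget from 13 to 33, and each ReplicaSet is resized in proportion to its share of the old budget. A simplified sketch of that arithmetic (the real controller reads the `deployment.kubernetes.io/max-replicas` annotation and resolves rounding leftovers explicitly):

```python
def proportional_size(rs_replicas: int, new_max: int, old_max: int) -> int:
    """Resize a ReplicaSet in proportion to its share of the deployment's
    replica budget (simplified; leftover-handling in the real controller
    is more involved)."""
    return round(rs_replicas * new_max / old_max)

# Budget before scaling: 10 replicas + maxSurge 3 = 13; after: 30 + 3 = 33.
# The first rollout's ReplicaSet held 8 pods, the second rollout's held 5.
first = proportional_size(8, 33, 13)    # -> 20, matching the log
second = proportional_size(5, 33, 13)   # -> 13, matching the log
print(first, second)
```

The two results sum to 33, the scaled replica count plus maxSurge, which is exactly the `.spec.replicas = 20` and `.spec.replicas = 13` the test asserts.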
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 16 15:11:46.366: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-836,SelfLink:/apis/apps/v1/namespaces/deployment-836/deployments/nginx-deployment,UID:0af5b3c1-7f50-457a-b0f3-791478357b05,ResourceVersion:24590225,Generation:3,CreationTimestamp:2020-02-16 15:11:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-16 15:11:39 +0000 UTC 2020-02-16 15:11:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-16 15:11:42 +0000 UTC 2020-02-16 15:11:00 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 16 15:11:48.540: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-836,SelfLink:/apis/apps/v1/namespaces/deployment-836/replicasets/nginx-deployment-55fb7cb77f,UID:7e936312-a0b6-4903-8bf4-dec62c879646,ResourceVersion:24590212,Generation:3,CreationTimestamp:2020-02-16 15:11:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0af5b3c1-7f50-457a-b0f3-791478357b05 0xc002e69c87 0xc002e69c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 15:11:48.540: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 16 15:11:48.541: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-836,SelfLink:/apis/apps/v1/namespaces/deployment-836/replicasets/nginx-deployment-7b8c6f4498,UID:7647612a-0a69-471a-b9a1-ebbc6707d906,ResourceVersion:24590220,Generation:3,CreationTimestamp:2020-02-16 15:11:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0af5b3c1-7f50-457a-b0f3-791478357b05 0xc002e69d57 0xc002e69d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
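The two ReplicaSet dumps above show the rolling-update bookkeeping in action: the Deployment wants 30 replicas with `MaxSurge:3` and `MaxUnavailable:2`, so the controller may run up to 33 pods in total (matching the `deployment.kubernetes.io/max-replicas: 33` annotation), split here as 13 on the new ReplicaSet and 20 on the old one. A minimal sketch of that arithmetic, using illustrative variable names that are not part of the e2e suite:

```python
# Replica bounds for a RollingUpdate Deployment, reproduced from the log above.
# (Illustrative sketch only; the real controller logic lives in kube-controller-manager.)
desired = 30          # Spec.Replicas
max_surge = 3         # Strategy.RollingUpdate.MaxSurge
max_unavailable = 2   # Strategy.RollingUpdate.MaxUnavailable

max_total = desired + max_surge            # ceiling on pods across all ReplicaSets
min_available = desired - max_unavailable  # floor on available pods during the rollout

new_rs_replicas = 13  # new ReplicaSet "nginx-deployment-55fb7cb77f"
old_rs_replicas = 20  # old ReplicaSet "nginx-deployment-7b8c6f4498"

# The controller never exceeds the surge ceiling:
assert new_rs_replicas + old_rs_replicas == max_total == 33
assert min_available == 28
```

Because the new template's image tag (`nginx:404`) never becomes ready, the rollout stalls at exactly this 13/20 split: the old ReplicaSet cannot be scaled below the availability floor, and the new one cannot grow past the surge ceiling.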
Feb 16 15:11:51.272: INFO: Pod "nginx-deployment-55fb7cb77f-5pcss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5pcss,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-5pcss,UID:326fdff1-ed76-453d-ada1-13629bfe6dc2,ResourceVersion:24590221,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bd2f7 0xc0028bd2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028bd360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bd380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-16 15:11:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
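Each pod the log prints below is reported as "not available" because availability requires the pod's `Ready` condition to be `True` (held for `MinReadySeconds`, which is 0 here), and these pods are stuck `Pending` with `ContainersNotReady` while the kubelet fails to pull `nginx:404`. A simplified sketch of that check, with an illustrative helper name not taken from the e2e framework:

```python
# Simplified availability check (assumption: mirrors, but does not reproduce,
# the real IsPodAvailable logic in k8s.io/kubernetes/pkg/api/v1/pod).
def is_available(phase, conditions):
    ready = any(c["type"] == "Ready" and c["status"] == "True" for c in conditions)
    return phase == "Running" and ready

# Conditions copied from the pod dump above (nginx-deployment-55fb7cb77f-5pcss):
conditions = [
    {"type": "Initialized",     "status": "True"},
    {"type": "Ready",           "status": "False"},  # ContainersNotReady: [nginx]
    {"type": "ContainersReady", "status": "False"},
    {"type": "PodScheduled",    "status": "True"},
]
assert not is_available("Pending", conditions)
```

Note that the last two pods in this listing have only a `PodScheduled` condition and no `StartTime` at all: they were created seconds before the dump and had not yet been picked up by the kubelet, so they fail the same check even earlier.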
Feb 16 15:11:51.273: INFO: Pod "nginx-deployment-55fb7cb77f-6vtdq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6vtdq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-6vtdq,UID:edb08a60-d9bc-47bd-8d2f-0fe5a116b868,ResourceVersion:24590231,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bd457 0xc0028bd458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028bd4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bd4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-16 15:11:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.273: INFO: Pod "nginx-deployment-55fb7cb77f-77lrl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-77lrl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-77lrl,UID:70afc6cc-8ad4-4288-b784-6a14ec060f32,ResourceVersion:24590138,Generation:0,CreationTimestamp:2020-02-16 15:11:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bd5b7 0xc0028bd5b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028bd630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bd650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-16 15:11:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.274: INFO: Pod "nginx-deployment-55fb7cb77f-fzvz8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fzvz8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-fzvz8,UID:4b1fb6af-c177-46b7-9d30-e14fbc8eb626,ResourceVersion:24590126,Generation:0,CreationTimestamp:2020-02-16 15:11:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bd727 0xc0028bd728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028bd7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bd7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-16 15:11:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.274: INFO: Pod "nginx-deployment-55fb7cb77f-m8xzv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m8xzv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-m8xzv,UID:62543503-3f70-4179-8742-4474857c3303,ResourceVersion:24590146,Generation:0,CreationTimestamp:2020-02-16 15:11:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bd8a7 0xc0028bd8a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028bd910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bd930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-16 15:11:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.274: INFO: Pod "nginx-deployment-55fb7cb77f-nv957" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nv957,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-nv957,UID:ce6fb277-24ca-4bcc-9ef3-c29b8f748877,ResourceVersion:24590238,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bda07 0xc0028bda08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028bda80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bdaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-16 15:11:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.275: INFO: Pod "nginx-deployment-55fb7cb77f-pzdzw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pzdzw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-pzdzw,UID:da204668-d742-40c6-9f84-735ca279061e,ResourceVersion:24590124,Generation:0,CreationTimestamp:2020-02-16 15:11:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bdb77 0xc0028bdb78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028bdbe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bdc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-16 15:11:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.275: INFO: Pod "nginx-deployment-55fb7cb77f-sz58v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sz58v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-sz58v,UID:bc6bee68-a6bd-49c7-a549-1ea159d6befb,ResourceVersion:24590195,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bdce7 0xc0028bdce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028bdd50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bdd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.276: INFO: Pod "nginx-deployment-55fb7cb77f-tdpl7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tdpl7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-tdpl7,UID:5a1e0fc3-b4e7-4294-8a4e-a1e4fba11a34,ResourceVersion:24590201,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bddf7 0xc0028bddf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028bde80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bdea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.276: INFO: Pod "nginx-deployment-55fb7cb77f-wgjx7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wgjx7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-wgjx7,UID:2cd78bb1-0346-4eb7-a187-80ed006318ef,ResourceVersion:24590197,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028bdf27 0xc0028bdf28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028bdfa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028bdfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.277: INFO: Pod "nginx-deployment-55fb7cb77f-x4sjq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x4sjq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-x4sjq,UID:203a972f-bed0-436c-9b9a-508cf32e6350,ResourceVersion:24590151,Generation:0,CreationTimestamp:2020-02-16 15:11:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028fa047 0xc0028fa048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028fa0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-16 15:11:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.277: INFO: Pod "nginx-deployment-55fb7cb77f-xkzgk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xkzgk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-xkzgk,UID:21edd30f-a6df-4257-b5b9-ba169b306690,ResourceVersion:24590198,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028fa1f7 0xc0028fa1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028fa270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.277: INFO: Pod "nginx-deployment-55fb7cb77f-xsw7v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xsw7v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-55fb7cb77f-xsw7v,UID:3a2a8ed6-261b-4d83-9713-873468e59966,ResourceVersion:24590206,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 7e936312-a0b6-4903-8bf4-dec62c879646 0xc0028fa317 0xc0028fa318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
 NoExecute 0xc0028fa390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa3b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.278: INFO: Pod "nginx-deployment-7b8c6f4498-2gsmp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2gsmp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-2gsmp,UID:382ec5db-8647-477f-811d-db8f761b068c,ResourceVersion:24590087,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fa437 0xc0028fa438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fa4b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-16 15:11:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://626790889ce77dc42f237b510880aa028389096b53f56e0baa908005cb8cb8c8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.278: INFO: Pod "nginx-deployment-7b8c6f4498-2s9bn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2s9bn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-2s9bn,UID:b2a11423-4ab7-4736-85dc-c5a384a6eec5,ResourceVersion:24590046,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fa5a7 0xc0028fa5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fa610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-16 15:11:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://96b3904e70740a86f4f901d4f658e2a110053b6b016d2bd590fd774bbe6ba505}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.278: INFO: Pod "nginx-deployment-7b8c6f4498-5s94k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5s94k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-5s94k,UID:ed3668fb-2d5f-4dac-bec5-2d5c4b11a315,ResourceVersion:24590199,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fa707 0xc0028fa708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fa780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.279: INFO: Pod "nginx-deployment-7b8c6f4498-6mlv6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6mlv6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-6mlv6,UID:0c1e6fc5-7679-495e-9deb-c9267900ae3a,ResourceVersion:24590191,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fa827 0xc0028fa828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fa8a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.279: INFO: Pod "nginx-deployment-7b8c6f4498-76ghb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76ghb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-76ghb,UID:f7d71e09-2cc0-4fa1-9842-5397eb696a74,ResourceVersion:24590052,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fa947 0xc0028fa948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fa9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fa9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-16 15:11:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a2a77736810a0c40e3217569b63405648ad36d76f2cc9fab20b88c7619e6b8f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.279: INFO: Pod "nginx-deployment-7b8c6f4498-bksm7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bksm7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-bksm7,UID:6ac18839-1e20-4180-b6ce-2fc65b0229f4,ResourceVersion:24590190,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028faaa7 0xc0028faaa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fab10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fab30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.279: INFO: Pod "nginx-deployment-7b8c6f4498-bln74" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bln74,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-bln74,UID:0523eaab-8cd7-4be5-8244-1fb67d15efca,ResourceVersion:24590202,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fabb7 0xc0028fabb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fac30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fac50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.280: INFO: Pod "nginx-deployment-7b8c6f4498-bws9l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bws9l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-bws9l,UID:c7614e29-a244-4b3f-9a18-0a02cc67541d,ResourceVersion:24590194,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028facd7 0xc0028facd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fad50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fad70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.280: INFO: Pod "nginx-deployment-7b8c6f4498-cfvlt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cfvlt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-cfvlt,UID:f1b53f01-2578-408c-9cb1-25c86b8916b6,ResourceVersion:24590189,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fadf7 0xc0028fadf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fae60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fae80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.282: INFO: Pod "nginx-deployment-7b8c6f4498-h2jtr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h2jtr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-h2jtr,UID:26023195-578f-4ef3-87e2-062c3c98aae3,ResourceVersion:24590217,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028faf07 0xc0028faf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028faf90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fafb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-16 15:11:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.283: INFO: Pod "nginx-deployment-7b8c6f4498-k4tkn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k4tkn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-k4tkn,UID:5ecfb07b-eeb5-4628-af08-5647e6d1cf60,ResourceVersion:24590227,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb077 0xc0028fb078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb0f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-16 15:11:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.283: INFO: Pod "nginx-deployment-7b8c6f4498-lfljt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lfljt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-lfljt,UID:f206a83e-7e97-4304-9eea-6e5ba450cf35,ResourceVersion:24590084,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb1d7 0xc0028fb1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-16 15:11:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f7fd994dbf803334ad5d74195b0278253d27165b09507421aa92e69fbd771827}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.283: INFO: Pod "nginx-deployment-7b8c6f4498-ljmrx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ljmrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-ljmrx,UID:e50cac32-ff97-49c1-849b-0f2a93604a1c,ResourceVersion:24590090,Generation:0,CreationTimestamp:2020-02-16 15:11:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb347 0xc0028fb348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb3c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-16 15:11:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://82677a1e2dff53a0885e7f6846cb9ac515666a9dcb0b87c20c2d34663bcf357a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.283: INFO: Pod "nginx-deployment-7b8c6f4498-mvg2k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mvg2k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-mvg2k,UID:1d6cfe46-db69-4984-b687-966d79e7866d,ResourceVersion:24590055,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb4b7 0xc0028fb4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-16 15:11:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c883b62b385b56340212106e080cbcd386820e70c31c5cfe3d13516f8a3c8089}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.284: INFO: Pod "nginx-deployment-7b8c6f4498-n626s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n626s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-n626s,UID:338ed319-9ff9-4533-abc8-265d5423bba5,ResourceVersion:24590196,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb617 0xc0028fb618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.284: INFO: Pod "nginx-deployment-7b8c6f4498-ntxzl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ntxzl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-ntxzl,UID:5e731090-e5ca-48f9-9c05-63c60617ab3b,ResourceVersion:24590049,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb737 0xc0028fb738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-16 15:11:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dc12d59f7a4fe6404258d385e3193affe66822e02b0e0ad86dd012822fd15042}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.284: INFO: Pod "nginx-deployment-7b8c6f4498-p68ts" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p68ts,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-p68ts,UID:ff2756c7-6588-4aa1-a174-6e9f9db17aa0,ResourceVersion:24590200,Generation:0,CreationTimestamp:2020-02-16 15:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb897 0xc0028fb898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fb900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fb920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.284: INFO: Pod "nginx-deployment-7b8c6f4498-sv5pv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sv5pv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-sv5pv,UID:d2d4c884-4562-4c69-9650-eaab3c1e4bcf,ResourceVersion:24590187,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fb9a7 0xc0028fb9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fba20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fba40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.285: INFO: Pod "nginx-deployment-7b8c6f4498-t7nxh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t7nxh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-t7nxh,UID:0816a682-2b89-4186-8b7c-3492e195f65e,ResourceVersion:24590209,Generation:0,CreationTimestamp:2020-02-16 15:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fbac7 0xc0028fbac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fbb30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fbb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-16 15:11:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 15:11:51.285: INFO: Pod "nginx-deployment-7b8c6f4498-zjd2t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zjd2t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-836,SelfLink:/api/v1/namespaces/deployment-836/pods/nginx-deployment-7b8c6f4498-zjd2t,UID:fc3230d7-00f0-4184-b7ed-283e169b13a5,ResourceVersion:24590076,Generation:0,CreationTimestamp:2020-02-16 15:11:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 7647612a-0a69-471a-b9a1-ebbc6707d906 0xc0028fbc17 0xc0028fbc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxz58 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxz58,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hxz58 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028fbc90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028fbcb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:11:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-16 15:11:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 15:11:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://01f18c6f93b5e1eac6ac29038a81f1acec43eef3a606a3bdd5d81b4d796621b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:11:51.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-836" for this suite.
Feb 16 15:12:58.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:12:59.015: INFO: namespace deployment-836 deletion completed in 1m6.235968118s

• [SLOW TEST:119.132 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:12:59.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1009
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1009 to expose endpoints map[]
Feb 16 15:13:01.448: INFO: successfully validated that service endpoint-test2 in namespace services-1009 exposes endpoints map[] (234.462342ms elapsed)
STEP: Creating pod pod1 in namespace services-1009
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1009 to expose endpoints map[pod1:[80]]
Feb 16 15:13:05.702: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.234022764s elapsed, will retry)
Feb 16 15:13:10.836: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.367272256s elapsed, will retry)
Feb 16 15:13:15.928: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (14.459813483s elapsed, will retry)
Feb 16 15:13:18.979: INFO: successfully validated that service endpoint-test2 in namespace services-1009 exposes endpoints map[pod1:[80]] (17.510614129s elapsed)
STEP: Creating pod pod2 in namespace services-1009
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1009 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 16 15:13:23.624: INFO: Unexpected endpoints: found map[538eef40-5360-4f31-a4d9-6c446745a3f0:[80]], expected map[pod1:[80] pod2:[80]] (4.63890847s elapsed, will retry)
Feb 16 15:13:28.190: INFO: successfully validated that service endpoint-test2 in namespace services-1009 exposes endpoints map[pod1:[80] pod2:[80]] (9.205378929s elapsed)
STEP: Deleting pod pod1 in namespace services-1009
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1009 to expose endpoints map[pod2:[80]]
Feb 16 15:13:28.290: INFO: successfully validated that service endpoint-test2 in namespace services-1009 exposes endpoints map[pod2:[80]] (65.086672ms elapsed)
STEP: Deleting pod pod2 in namespace services-1009
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1009 to expose endpoints map[]
Feb 16 15:13:28.332: INFO: successfully validated that service endpoint-test2 in namespace services-1009 exposes endpoints map[] (11.056812ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:13:28.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1009" for this suite.
Feb 16 15:13:50.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:13:50.661: INFO: namespace services-1009 deletion completed in 22.182104793s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:51.646 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:13:50.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:13:50.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5430" for this suite.
Feb 16 15:14:12.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:14:13.009: INFO: namespace pods-5430 deletion completed in 22.147782754s

• [SLOW TEST:22.347 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:14:13.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c983c3cd-437c-4a84-a378-db20ce45319d
STEP: Creating secret with name s-test-opt-upd-ef2dc249-60a1-422a-958c-5db283aab4e9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c983c3cd-437c-4a84-a378-db20ce45319d
STEP: Updating secret s-test-opt-upd-ef2dc249-60a1-422a-958c-5db283aab4e9
STEP: Creating secret with name s-test-opt-create-242cc831-9f68-4844-9fb2-fa1920f17189
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:14:29.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4725" for this suite.
Feb 16 15:14:53.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:14:53.914: INFO: namespace secrets-4725 deletion completed in 24.266334318s

• [SLOW TEST:40.904 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:14:53.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 16 15:14:54.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 16 15:14:54.158: INFO: stderr: ""
Feb 16 15:14:54.159: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:14:54.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1289" for this suite.
Feb 16 15:15:00.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:15:00.376: INFO: namespace kubectl-1289 deletion completed in 6.212608818s

• [SLOW TEST:6.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:15:00.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:15:00.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9154" for this suite.
Feb 16 15:15:06.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:15:06.709: INFO: namespace services-9154 deletion completed in 6.142265904s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.333 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:15:06.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 16 15:15:06.870: INFO: Number of nodes with available pods: 0
Feb 16 15:15:06.870: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:09.527: INFO: Number of nodes with available pods: 0
Feb 16 15:15:09.527: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:10.325: INFO: Number of nodes with available pods: 0
Feb 16 15:15:10.326: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:10.919: INFO: Number of nodes with available pods: 0
Feb 16 15:15:10.919: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:12.983: INFO: Number of nodes with available pods: 0
Feb 16 15:15:12.983: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:15.754: INFO: Number of nodes with available pods: 0
Feb 16 15:15:15.754: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:16.112: INFO: Number of nodes with available pods: 0
Feb 16 15:15:16.112: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:17.274: INFO: Number of nodes with available pods: 0
Feb 16 15:15:17.274: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:18.070: INFO: Number of nodes with available pods: 0
Feb 16 15:15:18.070: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:18.888: INFO: Number of nodes with available pods: 1
Feb 16 15:15:18.888: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 16 15:15:19.926: INFO: Number of nodes with available pods: 2
Feb 16 15:15:19.926: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 16 15:15:20.045: INFO: Number of nodes with available pods: 1
Feb 16 15:15:20.045: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:21.063: INFO: Number of nodes with available pods: 1
Feb 16 15:15:21.063: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:22.076: INFO: Number of nodes with available pods: 1
Feb 16 15:15:22.077: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:23.079: INFO: Number of nodes with available pods: 1
Feb 16 15:15:23.079: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:24.180: INFO: Number of nodes with available pods: 1
Feb 16 15:15:24.180: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:25.060: INFO: Number of nodes with available pods: 1
Feb 16 15:15:25.060: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:26.073: INFO: Number of nodes with available pods: 1
Feb 16 15:15:26.073: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:27.075: INFO: Number of nodes with available pods: 1
Feb 16 15:15:27.075: INFO: Node iruya-node is running more than one daemon pod
Feb 16 15:15:28.061: INFO: Number of nodes with available pods: 2
Feb 16 15:15:28.061: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6388, will wait for the garbage collector to delete the pods
Feb 16 15:15:28.130: INFO: Deleting DaemonSet.extensions daemon-set took: 11.69981ms
Feb 16 15:15:28.431: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.455839ms
Feb 16 15:15:46.677: INFO: Number of nodes with available pods: 0
Feb 16 15:15:46.677: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 15:15:46.683: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6388/daemonsets","resourceVersion":"24590946"},"items":null}

Feb 16 15:15:46.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6388/pods","resourceVersion":"24590946"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:15:46.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6388" for this suite.
Feb 16 15:15:52.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:15:52.890: INFO: namespace daemonsets-6388 deletion completed in 6.185144339s

• [SLOW TEST:46.181 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:15:52.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 16 15:15:53.142: INFO: Creating deployment "test-recreate-deployment"
Feb 16 15:15:53.151: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 16 15:15:53.258: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 16 15:15:55.269: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 16 15:15:55.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:15:57.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:15:59.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717462953, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 15:16:01.283: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 16 15:16:01.301: INFO: Updating deployment test-recreate-deployment
Feb 16 15:16:01.301: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 16 15:16:01.692: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5658,SelfLink:/apis/apps/v1/namespaces/deployment-5658/deployments/test-recreate-deployment,UID:4f837394-a10d-4cd4-816e-c9f89c18d81b,ResourceVersion:24591033,Generation:2,CreationTimestamp:2020-02-16 15:15:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-16 15:16:01 +0000 UTC 2020-02-16 15:16:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-16 15:16:01 +0000 UTC 2020-02-16 15:15:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 16 15:16:01.697: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5658,SelfLink:/apis/apps/v1/namespaces/deployment-5658/replicasets/test-recreate-deployment-5c8c9cc69d,UID:43ec0343-934a-400c-9ffd-314b4e2f6432,ResourceVersion:24591031,Generation:1,CreationTimestamp:2020-02-16 15:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4f837394-a10d-4cd4-816e-c9f89c18d81b 0xc0029d52b7 0xc0029d52b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 15:16:01.697: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 16 15:16:01.697: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5658,SelfLink:/apis/apps/v1/namespaces/deployment-5658/replicasets/test-recreate-deployment-6df85df6b9,UID:c81bc982-552d-46b6-ae1d-a8456d613788,ResourceVersion:24591022,Generation:2,CreationTimestamp:2020-02-16 15:15:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4f837394-a10d-4cd4-816e-c9f89c18d81b 0xc0029d5387 0xc0029d5388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 15:16:01.702: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5s6ml" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5s6ml,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5658,SelfLink:/api/v1/namespaces/deployment-5658/pods/test-recreate-deployment-5c8c9cc69d-5s6ml,UID:ae3a8b7f-c27b-448b-804e-5047b45df3ab,ResourceVersion:24591030,Generation:0,CreationTimestamp:2020-02-16 15:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 43ec0343-934a-400c-9ffd-314b4e2f6432 0xc00226a6a7 0xc00226a6a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6px5m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6px5m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6px5m true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00226a720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00226a740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 15:16:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:16:01.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5658" for this suite.
Feb 16 15:16:09.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:16:09.946: INFO: namespace deployment-5658 deletion completed in 8.235912036s

• [SLOW TEST:17.055 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:16:09.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 16 15:16:10.094: INFO: Waiting up to 5m0s for pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce" in namespace "emptydir-891" to be "success or failure"
Feb 16 15:16:10.141: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce": Phase="Pending", Reason="", readiness=false. Elapsed: 46.28368ms
Feb 16 15:16:12.150: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055288247s
Feb 16 15:16:14.162: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067256984s
Feb 16 15:16:16.174: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079153864s
Feb 16 15:16:18.185: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090073287s
Feb 16 15:16:20.192: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097892589s
STEP: Saw pod success
Feb 16 15:16:20.192: INFO: Pod "pod-bdb18404-03bd-4d50-8f58-1c01236220ce" satisfied condition "success or failure"
Feb 16 15:16:20.198: INFO: Trying to get logs from node iruya-node pod pod-bdb18404-03bd-4d50-8f58-1c01236220ce container test-container: 
STEP: delete the pod
Feb 16 15:16:20.807: INFO: Waiting for pod pod-bdb18404-03bd-4d50-8f58-1c01236220ce to disappear
Feb 16 15:16:20.816: INFO: Pod pod-bdb18404-03bd-4d50-8f58-1c01236220ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:16:20.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-891" for this suite.
Feb 16 15:16:26.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:16:27.010: INFO: namespace emptydir-891 deletion completed in 6.184845052s

• [SLOW TEST:17.063 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:16:27.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-9tv7
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 15:16:27.141: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9tv7" in namespace "subpath-1976" to be "success or failure"
Feb 16 15:16:27.146: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.389906ms
Feb 16 15:16:29.155: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013785514s
Feb 16 15:16:31.164: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02292559s
Feb 16 15:16:33.176: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035102944s
Feb 16 15:16:35.187: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045932243s
Feb 16 15:16:37.198: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057014959s
Feb 16 15:16:39.209: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 12.068347714s
Feb 16 15:16:41.217: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 14.075971963s
Feb 16 15:16:43.232: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 16.091113273s
Feb 16 15:16:45.241: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 18.09977672s
Feb 16 15:16:47.249: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 20.108485379s
Feb 16 15:16:49.258: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 22.116888832s
Feb 16 15:16:51.266: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 24.124900414s
Feb 16 15:16:53.274: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 26.133126897s
Feb 16 15:16:55.281: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 28.13977854s
Feb 16 15:16:57.294: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Running", Reason="", readiness=true. Elapsed: 30.153505248s
Feb 16 15:16:59.302: INFO: Pod "pod-subpath-test-secret-9tv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.161373578s
STEP: Saw pod success
Feb 16 15:16:59.302: INFO: Pod "pod-subpath-test-secret-9tv7" satisfied condition "success or failure"
Feb 16 15:16:59.311: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-9tv7 container test-container-subpath-secret-9tv7: 
STEP: delete the pod
Feb 16 15:16:59.380: INFO: Waiting for pod pod-subpath-test-secret-9tv7 to disappear
Feb 16 15:16:59.389: INFO: Pod pod-subpath-test-secret-9tv7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-9tv7
Feb 16 15:16:59.389: INFO: Deleting pod "pod-subpath-test-secret-9tv7" in namespace "subpath-1976"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:16:59.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1976" for this suite.
Feb 16 15:17:05.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:17:05.531: INFO: namespace subpath-1976 deletion completed in 6.134679532s

• [SLOW TEST:38.521 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:17:05.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 16 15:17:05.626: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:17:24.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5979" for this suite.
Feb 16 15:18:02.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:18:02.998: INFO: namespace init-container-5979 deletion completed in 38.128002472s

• [SLOW TEST:57.466 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:18:02.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0216 15:18:05.244122       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 15:18:05.244: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:18:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8736" for this suite.
Feb 16 15:18:11.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:18:11.479: INFO: namespace gc-8736 deletion completed in 6.229470188s

• [SLOW TEST:8.481 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:18:11.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 16 15:18:11.605: INFO: Waiting up to 5m0s for pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699" in namespace "var-expansion-3611" to be "success or failure"
Feb 16 15:18:11.638: INFO: Pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699": Phase="Pending", Reason="", readiness=false. Elapsed: 32.749872ms
Feb 16 15:18:13.647: INFO: Pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042020377s
Feb 16 15:18:15.653: INFO: Pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048174698s
Feb 16 15:18:17.663: INFO: Pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057470664s
Feb 16 15:18:19.674: INFO: Pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06853849s
STEP: Saw pod success
Feb 16 15:18:19.674: INFO: Pod "var-expansion-157c3550-1538-455a-b998-237bfc06b699" satisfied condition "success or failure"
Feb 16 15:18:19.679: INFO: Trying to get logs from node iruya-node pod var-expansion-157c3550-1538-455a-b998-237bfc06b699 container dapi-container: 
STEP: delete the pod
Feb 16 15:18:19.814: INFO: Waiting for pod var-expansion-157c3550-1538-455a-b998-237bfc06b699 to disappear
Feb 16 15:18:19.823: INFO: Pod var-expansion-157c3550-1538-455a-b998-237bfc06b699 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:18:19.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3611" for this suite.
Feb 16 15:18:25.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:18:25.965: INFO: namespace var-expansion-3611 deletion completed in 6.136497067s

• [SLOW TEST:14.485 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:18:25.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 15:18:26.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f" in namespace "downward-api-6620" to be "success or failure"
Feb 16 15:18:26.528: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.82097ms
Feb 16 15:18:28.537: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040711756s
Feb 16 15:18:30.563: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065846844s
Feb 16 15:18:32.580: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083102335s
Feb 16 15:18:34.605: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107880473s
Feb 16 15:18:36.622: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125508699s
STEP: Saw pod success
Feb 16 15:18:36.623: INFO: Pod "downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f" satisfied condition "success or failure"
Feb 16 15:18:36.647: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f container client-container: 
STEP: delete the pod
Feb 16 15:18:36.923: INFO: Waiting for pod downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f to disappear
Feb 16 15:18:36.933: INFO: Pod downwardapi-volume-8321b770-5c60-49cc-a6c1-160f8757213f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:18:36.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6620" for this suite.
Feb 16 15:18:42.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:18:43.093: INFO: namespace downward-api-6620 deletion completed in 6.152043831s

• [SLOW TEST:17.128 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:18:43.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 16 15:18:43.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c" in namespace "downward-api-4349" to be "success or failure"
Feb 16 15:18:43.165: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.552757ms
Feb 16 15:18:45.189: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028913676s
Feb 16 15:18:47.196: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036112634s
Feb 16 15:18:49.203: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042759606s
Feb 16 15:18:51.212: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052331841s
Feb 16 15:18:53.248: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087737738s
STEP: Saw pod success
Feb 16 15:18:53.248: INFO: Pod "downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c" satisfied condition "success or failure"
Feb 16 15:18:53.253: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c container client-container: 
STEP: delete the pod
Feb 16 15:18:53.540: INFO: Waiting for pod downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c to disappear
Feb 16 15:18:53.560: INFO: Pod downwardapi-volume-3d5a9ee3-da37-4543-a0b1-b74765dd068c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:18:53.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4349" for this suite.
Feb 16 15:18:59.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:18:59.885: INFO: namespace downward-api-4349 deletion completed in 6.316595965s

• [SLOW TEST:16.791 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 16 15:18:59.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 16 15:19:00.043: INFO: Waiting up to 5m0s for pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57" in namespace "var-expansion-9863" to be "success or failure"
Feb 16 15:19:00.062: INFO: Pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57": Phase="Pending", Reason="", readiness=false. Elapsed: 18.772883ms
Feb 16 15:19:02.076: INFO: Pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033022401s
Feb 16 15:19:04.163: INFO: Pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12018591s
Feb 16 15:19:06.173: INFO: Pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129531214s
Feb 16 15:19:08.182: INFO: Pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139151343s
STEP: Saw pod success
Feb 16 15:19:08.183: INFO: Pod "var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57" satisfied condition "success or failure"
Feb 16 15:19:08.185: INFO: Trying to get logs from node iruya-node pod var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57 container dapi-container: 
STEP: delete the pod
Feb 16 15:19:08.226: INFO: Waiting for pod var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57 to disappear
Feb 16 15:19:08.281: INFO: Pod var-expansion-8ecf7f79-77f5-4f16-9740-21b715ec6c57 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 16 15:19:08.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9863" for this suite.
Feb 16 15:19:14.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 15:19:14.471: INFO: namespace var-expansion-9863 deletion completed in 6.17737711s

• [SLOW TEST:14.585 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
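The `[k8s.io] Variable Expansion` spec above (container `dapi-container`, per the log) verifies that the kubelet expands `$(VAR)` references in a container's `args` before the process starts. A minimal manifest exercising the same behavior might look like the following; the env var name and value are illustrative, not from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example        # the test uses a unique var-expansion-<uid> name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name taken from the log above
    image: busybox
    env:
    - name: TEST_VAR
      value: "test-value"           # illustrative value
    command: ["sh", "-c"]
    # $(TEST_VAR) is expanded by the kubelet, not the shell, so the
    # container's args already contain the literal "test-value"
    args: ["echo $(TEST_VAR)"]
```

Because the container exits after echoing, the pod transitions through `Pending` to `Succeeded`, matching the phase progression recorded at 15:19:00–15:19:08 above.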
Feb 16 15:19:14.471: INFO: Running AfterSuite actions on all nodes
Feb 16 15:19:14.471: INFO: Running AfterSuite actions on node 1
Feb 16 15:19:14.471: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8595.599 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS