I0217 12:56:16.353600 8 e2e.go:243] Starting e2e run "4fc5e0ed-9db9-4734-b85d-1fa5d7dff60b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581944175 - Will randomize all specs
Will run 215 of 4412 specs

Feb 17 12:56:16.809: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 12:56:16.815: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 17 12:56:16.850: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 17 12:56:16.925: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 17 12:56:16.925: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 17 12:56:16.925: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 17 12:56:17.012: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 17 12:56:17.012: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 17 12:56:17.012: INFO: e2e test version: v1.15.7
Feb 17 12:56:17.014: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 12:56:17.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 17 12:56:17.180: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-67e07a78-50d3-483b-9ab5-679bb16a34fa
STEP: Creating a pod to test consume secrets
Feb 17 12:56:17.204: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178" in namespace "projected-237" to be "success or failure"
Feb 17 12:56:17.217: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 12.085567ms
Feb 17 12:56:19.224: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019648921s
Feb 17 12:56:21.231: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026299843s
Feb 17 12:56:23.246: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041320704s
Feb 17 12:56:25.254: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049668763s
Feb 17 12:56:27.269: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064647599s
Feb 17 12:56:29.751: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Pending", Reason="", readiness=false. Elapsed: 12.546223748s
Feb 17 12:56:31.766: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.561081917s
STEP: Saw pod success
Feb 17 12:56:31.766: INFO: Pod "pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178" satisfied condition "success or failure"
Feb 17 12:56:31.773: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178 container secret-volume-test:
STEP: delete the pod
Feb 17 12:56:32.046: INFO: Waiting for pod pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178 to disappear
Feb 17 12:56:32.158: INFO: Pod pod-projected-secrets-978624a7-0017-4b55-811a-3dda724c8178 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 12:56:32.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-237" for this suite.
Feb 17 12:56:38.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:56:38.345: INFO: namespace projected-237 deletion completed in 6.177105814s

• [SLOW TEST:21.330 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 12:56:38.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 17 12:56:38.429: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 17 12:56:38.454: INFO: Waiting for terminating namespaces to be deleted...
Feb 17 12:56:38.456: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 17 12:56:38.488: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 17 12:56:38.488: INFO: Container weave ready: true, restart count 0
Feb 17 12:56:38.488: INFO: Container weave-npc ready: true, restart count 0
Feb 17 12:56:38.488: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.488: INFO: Container kube-bench ready: false, restart count 0
Feb 17 12:56:38.488: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.488: INFO: Container kube-proxy ready: true, restart count 0
Feb 17 12:56:38.488: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 17 12:56:38.500: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container kube-controller-manager ready: true, restart count 23
Feb 17 12:56:38.500: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container kube-proxy ready: true, restart count 0
Feb 17 12:56:38.500: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container kube-apiserver ready: true, restart count 0
Feb 17 12:56:38.500: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container kube-scheduler ready: true, restart count 15
Feb 17 12:56:38.500: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container coredns ready: true, restart count 0
Feb 17 12:56:38.500: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container coredns ready: true, restart count 0
Feb 17 12:56:38.500: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container etcd ready: true, restart count 0
Feb 17 12:56:38.500: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 17 12:56:38.500: INFO: Container weave ready: true, restart count 0
Feb 17 12:56:38.500: INFO: Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e341dca0-717c-4c39-9f5b-e8ad04aec8b6 42
STEP: Trying to relaunch the pod, now with labels.
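[Editor's note] The NodeSelector steps above (apply a random label to a node, then relaunch the pod with a matching selector) correspond to a pod spec along these lines. This is a sketch, not the test's actual manifest: the pod name and image are illustrative, while the label key and value (`42`) are taken from the log.

```yaml
# Hypothetical pod mirroring the NodeSelector predicate test: the pod can
# only schedule onto the node that carries the freshly applied label.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                  # illustrative name
spec:
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1      # placeholder image
  nodeSelector:
    kubernetes.io/e2e-e341dca0-717c-4c39-9f5b-e8ad04aec8b6: "42"
```

The test then asserts the pod lands on the labeled node (iruya-node) and cleans up by removing the label.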
STEP: removing the label kubernetes.io/e2e-e341dca0-717c-4c39-9f5b-e8ad04aec8b6 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e341dca0-717c-4c39-9f5b-e8ad04aec8b6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 12:56:58.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3761" for this suite.
Feb 17 12:57:12.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:57:13.045: INFO: namespace sched-pred-3761 deletion completed in 14.247032066s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:34.699 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 12:57:13.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 17 12:57:13.188: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 12:57:31.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9935" for this suite.
Feb 17 12:57:37.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:57:37.323: INFO: namespace pods-9935 deletion completed in 6.239241066s

• [SLOW TEST:24.278 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 12:57:37.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 17 12:57:37.399: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix750468952/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 12:57:37.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2960" for this suite.
Feb 17 12:57:43.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:57:43.653: INFO: namespace kubectl-2960 deletion completed in 6.18079485s

• [SLOW TEST:6.328 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 12:57:43.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 12:57:43.850: INFO: Create a RollingUpdate DaemonSet
Feb 17 12:57:43.899: INFO: Check that daemon pods launch on every node of the cluster
Feb 17 12:57:43.990: INFO: Number of nodes with available pods: 0
Feb 17 12:57:43.990: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:45.008: INFO: Number of nodes with available pods: 0
Feb 17 12:57:45.008: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:46.176: INFO: Number of nodes with available pods: 0
Feb 17 12:57:46.176: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:47.017: INFO: Number of nodes with available pods: 0
Feb 17 12:57:47.017: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:48.038: INFO: Number of nodes with available pods: 0
Feb 17 12:57:48.038: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:49.011: INFO: Number of nodes with available pods: 0
Feb 17 12:57:49.011: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:50.888: INFO: Number of nodes with available pods: 0
Feb 17 12:57:50.888: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:51.605: INFO: Number of nodes with available pods: 0
Feb 17 12:57:51.605: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:52.907: INFO: Number of nodes with available pods: 0
Feb 17 12:57:52.907: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:53.407: INFO: Number of nodes with available pods: 0
Feb 17 12:57:53.407: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:54.009: INFO: Number of nodes with available pods: 0
Feb 17 12:57:54.009: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:55.007: INFO: Number of nodes with available pods: 1
Feb 17 12:57:55.007: INFO: Node iruya-node is running more than one daemon pod
Feb 17 12:57:56.006: INFO: Number of nodes with available pods: 2
Feb 17 12:57:56.006: INFO: Number of running nodes: 2, number of available pods: 2
Feb 17 12:57:56.006: INFO: Update the DaemonSet to trigger a rollout
Feb 17 12:57:56.018: INFO: Updating DaemonSet daemon-set
Feb 17 12:58:03.073: INFO: Roll back the DaemonSet before rollout is complete
Feb 17 12:58:03.553: INFO: Updating DaemonSet daemon-set
Feb 17 12:58:03.554: INFO: Make sure DaemonSet rollback is complete
Feb 17 12:58:03.576: INFO: Wrong image for pod: daemon-set-txp2b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 17 12:58:03.576: INFO: Pod daemon-set-txp2b is not available
Feb 17 12:58:04.622: INFO: Wrong image for pod: daemon-set-txp2b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 17 12:58:04.622: INFO: Pod daemon-set-txp2b is not available
Feb 17 12:58:05.623: INFO: Wrong image for pod: daemon-set-txp2b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 17 12:58:05.623: INFO: Pod daemon-set-txp2b is not available
Feb 17 12:58:06.640: INFO: Wrong image for pod: daemon-set-txp2b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 17 12:58:06.640: INFO: Pod daemon-set-txp2b is not available
Feb 17 12:58:07.685: INFO: Wrong image for pod: daemon-set-txp2b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 17 12:58:07.685: INFO: Pod daemon-set-txp2b is not available
Feb 17 12:58:08.635: INFO: Pod daemon-set-c44sg is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-533, will wait for the garbage collector to delete the pods
Feb 17 12:58:08.725: INFO: Deleting DaemonSet.extensions daemon-set took: 12.652112ms
Feb 17 12:58:09.625: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.744122ms
Feb 17 12:58:26.635: INFO: Number of nodes with available pods: 0
Feb 17 12:58:26.635: INFO: Number of running nodes: 0, number of available pods: 0
Feb 17 12:58:26.644: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-533/daemonsets","resourceVersion":"24694319"},"items":null}
Feb 17 12:58:26.650: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-533/pods","resourceVersion":"24694319"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 12:58:26.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-533" for this suite.
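[Editor's note] The rollback sequence logged above (update the DaemonSet to a bad image, roll back before the rollout completes, verify pods return to the expected image without unnecessary restarts) relies on a DaemonSet with a RollingUpdate strategy. A minimal sketch follows; the DaemonSet name and "expected" image appear in the log, while the labels and container name are illustrative:

```yaml
# Hypothetical DaemonSet mirroring the rollback test. The test updates
# the image to a non-existent one (foo:non-existent in the log), then
# rolls back to this spec and checks running pods were not restarted.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set        # illustrative label
  updateStrategy:
    type: RollingUpdate                 # required for rollout/rollback behavior
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                       # illustrative container name
        image: docker.io/library/nginx:1.14-alpine   # "Expected" image in the log
```

The key assertion is that pods still running the expected image are left alone during rollback; only the pod stuck on the bad image (daemon-set-txp2b) is replaced.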
Feb 17 12:58:32.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:58:32.854: INFO: namespace daemonsets-533 deletion completed in 6.179534605s • [SLOW TEST:49.201 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 12:58:32.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 17 12:58:33.664: INFO: Pod name wrapped-volume-race-9f4d002d-0e08-43a2-9184-358fdcef4bcb: Found 0 pods out of 5 Feb 17 12:58:38.677: INFO: Pod name wrapped-volume-race-9f4d002d-0e08-43a2-9184-358fdcef4bcb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9f4d002d-0e08-43a2-9184-358fdcef4bcb in namespace emptydir-wrapper-1164, will wait for the garbage collector to delete the pods Feb 17 12:59:06.891: INFO: Deleting ReplicationController 
wrapped-volume-race-9f4d002d-0e08-43a2-9184-358fdcef4bcb took: 14.69228ms Feb 17 12:59:07.491: INFO: Terminating ReplicationController wrapped-volume-race-9f4d002d-0e08-43a2-9184-358fdcef4bcb pods took: 600.462834ms STEP: Creating RC which spawns configmap-volume pods Feb 17 12:59:57.135: INFO: Pod name wrapped-volume-race-a3d0a9ae-f048-4952-ad9d-56fc6fa02316: Found 0 pods out of 5 Feb 17 13:00:02.192: INFO: Pod name wrapped-volume-race-a3d0a9ae-f048-4952-ad9d-56fc6fa02316: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a3d0a9ae-f048-4952-ad9d-56fc6fa02316 in namespace emptydir-wrapper-1164, will wait for the garbage collector to delete the pods Feb 17 13:00:36.334: INFO: Deleting ReplicationController wrapped-volume-race-a3d0a9ae-f048-4952-ad9d-56fc6fa02316 took: 33.548165ms Feb 17 13:00:36.734: INFO: Terminating ReplicationController wrapped-volume-race-a3d0a9ae-f048-4952-ad9d-56fc6fa02316 pods took: 400.423027ms STEP: Creating RC which spawns configmap-volume pods Feb 17 13:01:26.824: INFO: Pod name wrapped-volume-race-e26acc08-934b-4d53-ad6d-fb525f28404f: Found 0 pods out of 5 Feb 17 13:01:31.870: INFO: Pod name wrapped-volume-race-e26acc08-934b-4d53-ad6d-fb525f28404f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e26acc08-934b-4d53-ad6d-fb525f28404f in namespace emptydir-wrapper-1164, will wait for the garbage collector to delete the pods Feb 17 13:02:08.055: INFO: Deleting ReplicationController wrapped-volume-race-e26acc08-934b-4d53-ad6d-fb525f28404f took: 22.228958ms Feb 17 13:02:08.356: INFO: Terminating ReplicationController wrapped-volume-race-e26acc08-934b-4d53-ad6d-fb525f28404f pods took: 300.554464ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:02:57.726: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1164" for this suite. Feb 17 13:03:07.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:03:08.324: INFO: namespace emptydir-wrapper-1164 deletion completed in 10.589643872s • [SLOW TEST:275.470 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:03:08.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-b4fb452f-e555-4266-863b-50e6c3a2f212 STEP: Creating a pod to test consume configMaps Feb 17 13:03:08.527: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b" in namespace "configmap-8534" to be "success or failure" Feb 17 13:03:08.549: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.535926ms Feb 17 13:03:10.563: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03642375s Feb 17 13:03:12.577: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05046423s Feb 17 13:03:14.607: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080772777s Feb 17 13:03:16.623: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096133895s Feb 17 13:03:18.630: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103306638s Feb 17 13:03:20.640: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.113042023s Feb 17 13:03:22.653: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.125921592s Feb 17 13:03:24.663: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.136448521s Feb 17 13:03:26.682: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.1550717s STEP: Saw pod success Feb 17 13:03:26.682: INFO: Pod "pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b" satisfied condition "success or failure" Feb 17 13:03:26.687: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b container configmap-volume-test: STEP: delete the pod Feb 17 13:03:26.807: INFO: Waiting for pod pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b to disappear Feb 17 13:03:26.812: INFO: Pod pod-configmaps-4e122df0-3018-40d9-935e-8b25afda490b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:03:26.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8534" for this suite. Feb 17 13:03:32.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:03:33.031: INFO: namespace configmap-8534 deletion completed in 6.216023251s • [SLOW TEST:24.707 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:03:33.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3282
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 13:03:33.196: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 13:04:15.527: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3282 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:04:15.527: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:04:15.597249 8 log.go:172] (0xc0013cea50) (0xc0009f5ae0) Create stream
I0217 13:04:15.597298 8 log.go:172] (0xc0013cea50) (0xc0009f5ae0) Stream added, broadcasting: 1
I0217 13:04:15.602939 8 log.go:172] (0xc0013cea50) Reply frame received for 1
I0217 13:04:15.603006 8 log.go:172] (0xc0013cea50) (0xc001ace000) Create stream
I0217 13:04:15.603021 8 log.go:172] (0xc0013cea50) (0xc001ace000) Stream added, broadcasting: 3
I0217 13:04:15.604325 8 log.go:172] (0xc0013cea50) Reply frame received for 3
I0217 13:04:15.604358 8 log.go:172] (0xc0013cea50) (0xc001ace0a0) Create stream
I0217 13:04:15.604413 8 log.go:172] (0xc0013cea50) (0xc001ace0a0) Stream added, broadcasting: 5
I0217 13:04:15.606069 8 log.go:172] (0xc0013cea50) Reply frame received for 5
I0217 13:04:15.995006 8 log.go:172] (0xc0013cea50) Data frame received for 3
I0217 13:04:15.995083 8 log.go:172] (0xc001ace000) (3) Data frame handling
I0217 13:04:15.995136 8 log.go:172] (0xc001ace000) (3) Data frame sent
I0217 13:04:16.155881 8 log.go:172] (0xc0013cea50) Data frame received for 1
I0217 13:04:16.156033 8 log.go:172] (0xc0009f5ae0) (1) Data frame handling
I0217 13:04:16.156076 8 log.go:172] (0xc0009f5ae0) (1) Data frame sent
I0217 13:04:16.156332 8 log.go:172] (0xc0013cea50) (0xc001ace0a0) Stream removed, broadcasting: 5
I0217 13:04:16.156389 8 log.go:172] (0xc0013cea50) (0xc0009f5ae0) Stream removed, broadcasting: 1
I0217 13:04:16.156572 8 log.go:172] (0xc0013cea50) (0xc001ace000) Stream removed, broadcasting: 3
I0217 13:04:16.156637 8 log.go:172] (0xc0013cea50) (0xc0009f5ae0) Stream removed, broadcasting: 1
I0217 13:04:16.156657 8 log.go:172] (0xc0013cea50) (0xc001ace000) Stream removed, broadcasting: 3
I0217 13:04:16.156676 8 log.go:172] (0xc0013cea50) (0xc001ace0a0) Stream removed, broadcasting: 5
I0217 13:04:16.157171 8 log.go:172] (0xc0013cea50) Go away received
Feb 17 13:04:16.157: INFO: Found all expected endpoints: [netserver-0]
Feb 17 13:04:16.169: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3282 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:04:16.169: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:04:16.245615 8 log.go:172] (0xc0013cf080) (0xc0012240a0) Create stream
I0217 13:04:16.245715 8 log.go:172] (0xc0013cf080) (0xc0012240a0) Stream added, broadcasting: 1
I0217 13:04:16.254150 8 log.go:172] (0xc0013cf080) Reply frame received for 1
I0217 13:04:16.254223 8 log.go:172] (0xc0013cf080) (0xc001ace140) Create stream
I0217 13:04:16.254244 8 log.go:172] (0xc0013cf080) (0xc001ace140) Stream added, broadcasting: 3
I0217 13:04:16.256240 8 log.go:172] (0xc0013cf080) Reply frame received for 3
I0217 13:04:16.256292 8 log.go:172] (0xc0013cf080) (0xc00096b0e0) Create stream
I0217 13:04:16.256312 8 log.go:172] (0xc0013cf080) (0xc00096b0e0) Stream added, broadcasting: 5
I0217 13:04:16.258091 8 log.go:172] (0xc0013cf080) Reply frame received for 5
I0217 13:04:16.364080 8 log.go:172] (0xc0013cf080) Data frame received for 3
I0217 13:04:16.364161 8 log.go:172] (0xc001ace140) (3) Data frame handling
I0217 13:04:16.364191 8 log.go:172] (0xc001ace140) (3) Data frame sent
I0217 13:04:16.465853 8 log.go:172] (0xc0013cf080) Data frame received for 1
I0217 13:04:16.465939 8 log.go:172] (0xc0013cf080) (0xc001ace140) Stream removed, broadcasting: 3
I0217 13:04:16.465979 8 log.go:172] (0xc0012240a0) (1) Data frame handling
I0217 13:04:16.466001 8 log.go:172] (0xc0012240a0) (1) Data frame sent
I0217 13:04:16.466009 8 log.go:172] (0xc0013cf080) (0xc0012240a0) Stream removed, broadcasting: 1
I0217 13:04:16.466581 8 log.go:172] (0xc0013cf080) (0xc00096b0e0) Stream removed, broadcasting: 5
I0217 13:04:16.466641 8 log.go:172] (0xc0013cf080) (0xc0012240a0) Stream removed, broadcasting: 1
I0217 13:04:16.466655 8 log.go:172] (0xc0013cf080) (0xc001ace140) Stream removed, broadcasting: 3
I0217 13:04:16.466667 8 log.go:172] (0xc0013cf080) (0xc00096b0e0) Stream removed, broadcasting: 5
Feb 17 13:04:16.466: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:04:16.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0217 13:04:16.467196 8 log.go:172] (0xc0013cf080) Go away received
STEP: Destroying namespace "pod-network-test-3282" for this suite.
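The connectivity probe above is an ordinary curl run inside the hostexec container of host-test-container-pod; the pod IPs (10.32.0.4, 10.44.0.1) are specific to this run and unreachable outside the cluster. A minimal sketch of the check, with the cluster-dependent curl replaced by canned output so it runs anywhere:

```shell
# Inside the cluster the framework runs:
#   curl -g -q -s --max-time 15 --connect-timeout 1 \
#       http://10.32.0.4:8080/hostName | grep -v '^\s*$'
# and treats the surviving line as the endpoint's hostname. The trailing grep
# strips blank lines, so an empty reply does not count as a found endpoint.
# Canned stand-in for the curl reply (a hostname plus a trailing blank line):
printf 'netserver-0\n\n' | grep -v '^\s*$'
```

With a healthy netserver pod the filtered output is exactly the pod's hostname (here `netserver-0`), which is how "Found all expected endpoints" is decided.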
Feb 17 13:04:40.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:04:40.664: INFO: namespace pod-network-test-3282 deletion completed in 24.18811251s
• [SLOW TEST:67.632 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:04:40.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-9920043f-6f6f-4a5e-95a8-a91e4b118962 in namespace container-probe-6307
Feb 17 13:04:50.769: INFO: Started pod busybox-9920043f-6f6f-4a5e-95a8-a91e4b118962 in namespace container-probe-6307
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 13:04:50.773: INFO: Initial restart count of pod busybox-9920043f-6f6f-4a5e-95a8-a91e4b118962 is 0
Feb 17 13:05:43.207: INFO: Restart count of pod container-probe-6307/busybox-9920043f-6f6f-4a5e-95a8-a91e4b118962 is now 1 (52.433973599s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:05:43.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6307" for this suite.
Feb 17 13:05:49.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:05:49.446: INFO: namespace container-probe-6307 deletion completed in 6.164168992s
• [SLOW TEST:68.781 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:05:49.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 17 13:05:49.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1568'
Feb 17 13:05:51.675: INFO: stderr: ""
Feb 17 13:05:51.676: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 17 13:05:51.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:05:51.785: INFO: stderr: ""
Feb 17 13:05:51.785: INFO: stdout: "update-demo-nautilus-6g7ds update-demo-nautilus-xngx2 "
Feb 17 13:05:51.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7ds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:05:52.056: INFO: stderr: ""
Feb 17 13:05:52.056: INFO: stdout: ""
Feb 17 13:05:52.056: INFO: update-demo-nautilus-6g7ds is created but not running
Feb 17 13:05:57.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:05:57.135: INFO: stderr: ""
Feb 17 13:05:57.135: INFO: stdout: "update-demo-nautilus-6g7ds update-demo-nautilus-xngx2 "
Feb 17 13:05:57.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7ds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:05:57.207: INFO: stderr: ""
Feb 17 13:05:57.207: INFO: stdout: ""
Feb 17 13:05:57.207: INFO: update-demo-nautilus-6g7ds is created but not running
Feb 17 13:06:02.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:02.349: INFO: stderr: ""
Feb 17 13:06:02.349: INFO: stdout: "update-demo-nautilus-6g7ds update-demo-nautilus-xngx2 "
Feb 17 13:06:02.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7ds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:02.453: INFO: stderr: ""
Feb 17 13:06:02.453: INFO: stdout: ""
Feb 17 13:06:02.453: INFO: update-demo-nautilus-6g7ds is created but not running
Feb 17 13:06:07.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:07.585: INFO: stderr: ""
Feb 17 13:06:07.585: INFO: stdout: "update-demo-nautilus-6g7ds update-demo-nautilus-xngx2 "
Feb 17 13:06:07.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7ds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:07.686: INFO: stderr: ""
Feb 17 13:06:07.686: INFO: stdout: "true"
Feb 17 13:06:07.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7ds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:07.785: INFO: stderr: ""
Feb 17 13:06:07.785: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 13:06:07.785: INFO: validating pod update-demo-nautilus-6g7ds
Feb 17 13:06:07.877: INFO: got data: { "image": "nautilus.jpg" }
Feb 17 13:06:07.877: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 13:06:07.877: INFO: update-demo-nautilus-6g7ds is verified up and running
Feb 17 13:06:07.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xngx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:07.977: INFO: stderr: ""
Feb 17 13:06:07.977: INFO: stdout: "true"
Feb 17 13:06:07.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xngx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:08.065: INFO: stderr: ""
Feb 17 13:06:08.066: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 13:06:08.066: INFO: validating pod update-demo-nautilus-xngx2
Feb 17 13:06:08.091: INFO: got data: { "image": "nautilus.jpg" }
Feb 17 13:06:08.091: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 13:06:08.091: INFO: update-demo-nautilus-xngx2 is verified up and running
STEP: scaling down the replication controller
Feb 17 13:06:08.093: INFO: scanned /root for discovery docs:
Feb 17 13:06:08.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1568'
Feb 17 13:06:09.223: INFO: stderr: ""
Feb 17 13:06:09.223: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 17 13:06:09.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:09.372: INFO: stderr: ""
Feb 17 13:06:09.372: INFO: stdout: "update-demo-nautilus-6g7ds update-demo-nautilus-xngx2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 17 13:06:14.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:14.486: INFO: stderr: ""
Feb 17 13:06:14.486: INFO: stdout: "update-demo-nautilus-6g7ds update-demo-nautilus-xngx2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 17 13:06:19.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:19.575: INFO: stderr: ""
Feb 17 13:06:19.575: INFO: stdout: "update-demo-nautilus-xngx2 "
Feb 17 13:06:19.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xngx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:19.673: INFO: stderr: ""
Feb 17 13:06:19.673: INFO: stdout: "true"
Feb 17 13:06:19.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xngx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:19.753: INFO: stderr: ""
Feb 17 13:06:19.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 13:06:19.753: INFO: validating pod update-demo-nautilus-xngx2
Feb 17 13:06:19.757: INFO: got data: { "image": "nautilus.jpg" }
Feb 17 13:06:19.757: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 13:06:19.757: INFO: update-demo-nautilus-xngx2 is verified up and running
STEP: scaling up the replication controller
Feb 17 13:06:19.759: INFO: scanned /root for discovery docs:
Feb 17 13:06:19.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1568'
Feb 17 13:06:20.892: INFO: stderr: ""
Feb 17 13:06:20.892: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 17 13:06:20.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:21.018: INFO: stderr: ""
Feb 17 13:06:21.018: INFO: stdout: "update-demo-nautilus-v4nrf update-demo-nautilus-xngx2 "
Feb 17 13:06:21.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4nrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:21.108: INFO: stderr: ""
Feb 17 13:06:21.108: INFO: stdout: ""
Feb 17 13:06:21.108: INFO: update-demo-nautilus-v4nrf is created but not running
Feb 17 13:06:26.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:26.749: INFO: stderr: ""
Feb 17 13:06:26.749: INFO: stdout: "update-demo-nautilus-v4nrf update-demo-nautilus-xngx2 "
Feb 17 13:06:26.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4nrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:26.887: INFO: stderr: ""
Feb 17 13:06:26.887: INFO: stdout: ""
Feb 17 13:06:26.888: INFO: update-demo-nautilus-v4nrf is created but not running
Feb 17 13:06:31.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1568'
Feb 17 13:06:32.023: INFO: stderr: ""
Feb 17 13:06:32.023: INFO: stdout: "update-demo-nautilus-v4nrf update-demo-nautilus-xngx2 "
Feb 17 13:06:32.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4nrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:32.124: INFO: stderr: ""
Feb 17 13:06:32.124: INFO: stdout: "true"
Feb 17 13:06:32.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v4nrf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:32.217: INFO: stderr: ""
Feb 17 13:06:32.217: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 13:06:32.217: INFO: validating pod update-demo-nautilus-v4nrf
Feb 17 13:06:32.233: INFO: got data: { "image": "nautilus.jpg" }
Feb 17 13:06:32.233: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 13:06:32.233: INFO: update-demo-nautilus-v4nrf is verified up and running
Feb 17 13:06:32.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xngx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:32.313: INFO: stderr: ""
Feb 17 13:06:32.313: INFO: stdout: "true"
Feb 17 13:06:32.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xngx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1568'
Feb 17 13:06:32.391: INFO: stderr: ""
Feb 17 13:06:32.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 13:06:32.391: INFO: validating pod update-demo-nautilus-xngx2
Feb 17 13:06:32.395: INFO: got data: { "image": "nautilus.jpg" }
Feb 17 13:06:32.395: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 13:06:32.395: INFO: update-demo-nautilus-xngx2 is verified up and running STEP: using delete to clean up resources Feb 17 13:06:32.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1568' Feb 17 13:06:32.489: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 13:06:32.489: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 17 13:06:32.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1568' Feb 17 13:06:32.594: INFO: stderr: "No resources found.\n" Feb 17 13:06:32.594: INFO: stdout: "" Feb 17 13:06:32.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1568 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 17 13:06:32.703: INFO: stderr: "" Feb 17 13:06:32.704: INFO: stdout: "update-demo-nautilus-v4nrf\nupdate-demo-nautilus-xngx2\n" Feb 17 13:06:33.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1568' Feb 17 13:06:33.299: INFO: stderr: "No resources found.\n" Feb 17 13:06:33.299: INFO: stdout: "" Feb 17 13:06:33.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1568 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 17 13:06:33.421: INFO: stderr: "" Feb 17 13:06:33.421: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:06:33.421: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1568" for this suite. Feb 17 13:06:39.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:06:39.902: INFO: namespace kubectl-1568 deletion completed in 6.471602012s • [SLOW TEST:50.455 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:06:39.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 17 13:06:40.121: INFO: Creating deployment "nginx-deployment" Feb 17 13:06:40.210: INFO: Waiting for observed generation 1 Feb 17 13:06:42.643: INFO: Waiting for all required pods to come up Feb 17 13:06:44.203: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 17 13:07:12.232: INFO: Waiting for deployment "nginx-deployment" to complete Feb 17 
13:07:12.241: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 17 13:07:12.252: INFO: Updating deployment nginx-deployment Feb 17 13:07:12.252: INFO: Waiting for observed generation 2 Feb 17 13:07:14.674: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 17 13:07:15.205: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 17 13:07:15.269: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 17 13:07:15.405: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 17 13:07:15.405: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 17 13:07:15.407: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 17 13:07:15.413: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 17 13:07:15.413: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 17 13:07:15.432: INFO: Updating deployment nginx-deployment Feb 17 13:07:15.432: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 17 13:07:16.423: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 17 13:07:16.614: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 17 13:07:16.668: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8536,SelfLink:/apis/apps/v1/namespaces/deployment-8536/deployments/nginx-deployment,UID:4f073755-0585-43c3-9c74-4dac2c133d8f,ResourceVersion:24696268,Generation:3,CreationTimestamp:2020-02-17 13:06:40 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-17 13:07:12 +0000 UTC 2020-02-17 13:06:40 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-17 13:07:16 +0000 UTC 2020-02-17 13:07:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 17 13:07:16.875: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8536,SelfLink:/apis/apps/v1/namespaces/deployment-8536/replicasets/nginx-deployment-55fb7cb77f,UID:96b4e0b2-f627-47fe-a9ae-076897b407a6,ResourceVersion:24696260,Generation:3,CreationTimestamp:2020-02-17 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4f073755-0585-43c3-9c74-4dac2c133d8f 0xc0009f7cf7 0xc0009f7cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 13:07:16.875: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 17 13:07:16.875: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8536,SelfLink:/apis/apps/v1/namespaces/deployment-8536/replicasets/nginx-deployment-7b8c6f4498,UID:dae76a75-f865-41a8-9168-ca8add101e06,ResourceVersion:24696258,Generation:3,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4f073755-0585-43c3-9c74-4dac2c133d8f 0xc0009f7e77 0xc0009f7e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 17 13:07:18.189: INFO: Pod "nginx-deployment-55fb7cb77f-4657m" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4657m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-4657m,UID:7ff48f06-d35a-4e90-96e0-0da848728239,ResourceVersion:24696253,Generation:0,CreationTimestamp:2020-02-17 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d4927 0xc0020d4928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0020d49a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d49c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-17 13:07:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.189: INFO: Pod "nginx-deployment-55fb7cb77f-4rzc4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4rzc4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-4rzc4,UID:2fcb9a32-c6ff-4ccc-8575-90f16ef94a04,ResourceVersion:24696296,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d4a97 0xc0020d4a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d4b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d4b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.190: INFO: Pod "nginx-deployment-55fb7cb77f-54rxr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-54rxr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-54rxr,UID:33a52373-5e1f-475b-8515-075ad61e53b0,ResourceVersion:24696248,Generation:0,CreationTimestamp:2020-02-17 13:07:12 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d4b90 0xc0020d4b91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d4c10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d4c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 
UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-17 13:07:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.190: INFO: Pod "nginx-deployment-55fb7cb77f-94zjl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-94zjl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-94zjl,UID:db1e9dcd-074c-4cfa-b61f-92d11c5e88c4,ResourceVersion:24696298,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d4d07 0xc0020d4d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d4d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d4da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.190: INFO: Pod "nginx-deployment-55fb7cb77f-9nqlf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9nqlf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-9nqlf,UID:f00c6d26-d6f3-45de-8cf4-591aed2a9ab5,ResourceVersion:24696226,Generation:0,CreationTimestamp:2020-02-17 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d4e27 0xc0020d4e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d4e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d4eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-17 13:07:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.190: INFO: Pod "nginx-deployment-55fb7cb77f-bnz8w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bnz8w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-bnz8w,UID:5a4ce626-acc3-4eee-8f4f-d40f6a300ba4,ResourceVersion:24696256,Generation:0,CreationTimestamp:2020-02-17 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d4f87 0xc0020d4f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5000} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-17 13:07:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.190: INFO: Pod "nginx-deployment-55fb7cb77f-g49tj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g49tj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-g49tj,UID:92047035-5007-477f-9259-d7ee60a30fc6,ResourceVersion:24696290,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d50f7 0xc0020d50f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0020d5170} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.190: INFO: Pod "nginx-deployment-55fb7cb77f-gppfg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gppfg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-gppfg,UID:5031edfa-47c0-46b4-9552-3ae041bd1db8,ResourceVersion:24696300,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d5217 0xc0020d5218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d52b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-55fb7cb77f-hd7dc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hd7dc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-hd7dc,UID:c9c39439-a9c0-4b48-95ef-fecda38538f6,ResourceVersion:24696304,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d5337 0xc0020d5338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d53a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d53c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-55fb7cb77f-jp72k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jp72k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-jp72k,UID:c2a3b056-ece7-44c0-adf4-d70559eaa22e,ResourceVersion:24696277,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d5447 0xc0020d5448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d54b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d54d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-55fb7cb77f-p9hz5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p9hz5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-p9hz5,UID:2be42dc7-6ff1-4424-8c36-3855c3a963ed,ResourceVersion:24696302,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d5567 
0xc0020d5568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d55e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-55fb7cb77f-scnvx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-scnvx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-scnvx,UID:a7b02830-57d0-4244-a5aa-df13af69267e,ResourceVersion:24696292,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d5697 0xc0020d5698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5700} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-55fb7cb77f-vfqj2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vfqj2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-55fb7cb77f-vfqj2,UID:5943d20e-dc68-4ec3-a0ce-d7501e7952a8,ResourceVersion:24696251,Generation:0,CreationTimestamp:2020-02-17 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 96b4e0b2-f627-47fe-a9ae-076897b407a6 0xc0020d57b7 
0xc0020d57b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5820} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-17 13:07:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-7b8c6f4498-6jrnq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6jrnq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-6jrnq,UID:a0ea47d1-c311-4b60-a8d2-16c468cf606a,ResourceVersion:24696192,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020d5917 0xc0020d5918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d59c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d59e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-17 13:06:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://675ac70da7d92c7488068ebe97234667d47857b72b62c59c4f8b64f5773fae0d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.191: INFO: Pod "nginx-deployment-7b8c6f4498-7kz6j" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7kz6j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-7kz6j,UID:4ad4e274-527e-4ec7-ba0d-28df602819e7,ResourceVersion:24696150,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020d5b07 0xc0020d5b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-17 13:06:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cb20181cc0cb96b9bb4f5f7df337815be8a2bd2ad03a70635b0628adf732ab56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.192: INFO: Pod "nginx-deployment-7b8c6f4498-8lqtj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8lqtj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-8lqtj,UID:c498541f-6e5d-41ae-b015-515413ca0700,ResourceVersion:24696293,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020d5cb7 0xc0020d5cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.192: INFO: Pod "nginx-deployment-7b8c6f4498-8zsq2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8zsq2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-8zsq2,UID:fe54b856-d94f-42b7-9c7f-f1a1b4fa2cef,ResourceVersion:24696289,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020d5dd7 0xc0020d5dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5e60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.192: INFO: Pod "nginx-deployment-7b8c6f4498-9vm8p" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9vm8p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-9vm8p,UID:0ad27f3d-3456-473c-8be3-6015733502d8,ResourceVersion:24696291,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020d5f07 0xc0020d5f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d5f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d5f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.192: INFO: Pod "nginx-deployment-7b8c6f4498-bsznt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bsznt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-bsznt,UID:9d04c2df-0e51-4220-9aae-ae03a284da1c,ResourceVersion:24696156,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082017 
0xc002082018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082080} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020820a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 
13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-17 13:06:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a9f4c770b6fcc100f6238975cbb0d34f4c67e7a7a69584c2273664fecf4aae4d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.192: INFO: Pod "nginx-deployment-7b8c6f4498-d2h72" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d2h72,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-d2h72,UID:510206d8-114d-4622-a3db-05ddcc9f329d,ResourceVersion:24696306,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082187 0xc002082188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082200} {node.kubernetes.io/unreachable Exists NoExecute 0xc002082220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.193: INFO: Pod "nginx-deployment-7b8c6f4498-djpfx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-djpfx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-djpfx,UID:93ed03bc-9ed1-424b-a420-af51859cbbc2,ResourceVersion:24696153,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020822a7 0xc0020822a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082310} {node.kubernetes.io/unreachable Exists NoExecute 0xc002082330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-17 13:06:40 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://072eafb16df9012dae3bc7e104aeb0beec00bb0475c76df7c01671793e7f1f40}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.193: INFO: Pod "nginx-deployment-7b8c6f4498-f6thv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f6thv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-f6thv,UID:6c41d2fa-024e-4517-9933-5cdf5f9a2ad3,ResourceVersion:24696189,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020824d7 0xc0020824d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020825a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020825c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-02-17 13:06:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b7a74a43ed9deb6c2b1edf04e8ee871939f07edeb68c3b35193feafeb235c7b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.193: INFO: Pod "nginx-deployment-7b8c6f4498-ggxjn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ggxjn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-ggxjn,UID:97576ccb-f816-4733-8f2d-6ce35a91e284,ResourceVersion:24696183,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082737 0xc002082738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020827f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002082830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-17 13:06:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://31ab1175b57169323d974080e1f3c8b641f9f6435a588b5ea29eb2319499fc3d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.193: INFO: Pod "nginx-deployment-7b8c6f4498-h6ncs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h6ncs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-h6ncs,UID:bbf539b8-f06f-41ed-aa2c-5afeeb0e3109,ResourceVersion:24696299,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082907 0xc002082908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002082a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.193: INFO: Pod "nginx-deployment-7b8c6f4498-kc5sx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kc5sx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-kc5sx,UID:fa697d9f-707a-4687-a385-8c2d2719fb4f,ResourceVersion:24696294,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082b97 0xc002082b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002082c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-ng9q7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ng9q7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-ng9q7,UID:82e00b67-21b9-4fcb-be31-2690cd571cba,ResourceVersion:24696307,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082d27 0xc002082d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002082e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-ql692" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ql692,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-ql692,UID:827f584f-dcee-4a9e-a077-faf8eb512e02,ResourceVersion:24696147,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002082f57 0xc002082f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002082fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002083060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-17 13:06:40 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://61ce911e5c9a3a932fcddff258d3cec1358c8dff977a6cb2f8142fbbf23b5304}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-rl6ml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rl6ml,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-rl6ml,UID:7d71b190-473c-4529-8d15-bef6e9551cb0,ResourceVersion:24696301,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002083167 0xc002083168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020831d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020831f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-tdkqd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tdkqd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-tdkqd,UID:8665c7b5-789e-413a-8001-81a0a090c2c4,ResourceVersion:24696186,Generation:0,CreationTimestamp:2020-02-17 13:06:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002083277 
0xc002083278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002083470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002083490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:06:40 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-17 13:06:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 13:07:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8284fb19f733cd8a1c412d26cf0f268ba85cadce40722073905e141493dd58ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-v77gk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v77gk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-v77gk,UID:fece6664-6f26-4bdd-8cd3-3d049e5c50f4,ResourceVersion:24696303,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020835f7 0xc0020835f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002083680} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020836e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-vxrb5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vxrb5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-vxrb5,UID:532316f2-0483-4112-805d-11f0aa5997bd,ResourceVersion:24696279,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002083787 
0xc002083788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002083890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020838b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.194: INFO: Pod "nginx-deployment-7b8c6f4498-wvrnx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wvrnx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-wvrnx,UID:807675ae-1d49-40c1-a66a-c81dbbcfeba2,ResourceVersion:24696264,Generation:0,CreationTimestamp:2020-02-17 13:07:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc0020839c7 0xc0020839c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002083a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002083aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 13:07:18.195: INFO: Pod "nginx-deployment-7b8c6f4498-zpjrv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zpjrv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8536,SelfLink:/api/v1/namespaces/deployment-8536/pods/nginx-deployment-7b8c6f4498-zpjrv,UID:a4c6a566-635a-4841-a517-be0dd52fba81,ResourceVersion:24696278,Generation:0,CreationTimestamp:2020-02-17 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dae76a75-f865-41a8-9168-ca8add101e06 0xc002083b27 0xc002083b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpb2z {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-jpb2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jpb2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002083b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002083bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:07:16 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:07:18.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "deployment-8536" for this suite. Feb 17 13:08:17.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:08:17.779: INFO: namespace deployment-8536 deletion completed in 57.377528923s • [SLOW TEST:97.877 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:08:17.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-05c08b4f-d987-4b7a-85f8-5ead5e7af4b2 STEP: Creating a pod to test consume secrets Feb 17 13:08:18.445: INFO: Waiting up to 5m0s for pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5" in namespace "secrets-9460" to be "success or failure" Feb 17 13:08:18.476: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.714283ms Feb 17 13:08:20.488: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042053429s Feb 17 13:08:22.508: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062471412s Feb 17 13:08:24.531: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085497147s Feb 17 13:08:26.541: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095512132s Feb 17 13:08:28.550: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104791187s Feb 17 13:08:30.560: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.114820925s Feb 17 13:08:32.573: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.127554416s Feb 17 13:08:34.585: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.13966604s Feb 17 13:08:36.598: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.152116617s Feb 17 13:08:38.625: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.17991166s STEP: Saw pod success Feb 17 13:08:38.626: INFO: Pod "pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5" satisfied condition "success or failure" Feb 17 13:08:38.630: INFO: Trying to get logs from node iruya-node pod pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5 container secret-volume-test: STEP: delete the pod Feb 17 13:08:38.734: INFO: Waiting for pod pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5 to disappear Feb 17 13:08:38.763: INFO: Pod pod-secrets-76f514e5-9fd4-49dc-a150-072a8b19d1b5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:08:38.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9460" for this suite. Feb 17 13:08:44.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:08:44.952: INFO: namespace secrets-9460 deletion completed in 6.176788522s • [SLOW TEST:27.173 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:08:44.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-9de348ed-2e75-4b5a-b5b3-1a51538a8912
STEP: Creating a pod to test consume secrets
Feb 17 13:08:45.047: INFO: Waiting up to 5m0s for pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0" in namespace "secrets-3686" to be "success or failure"
Feb 17 13:08:45.065: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.849613ms
Feb 17 13:08:47.072: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025273566s
Feb 17 13:08:49.165: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117958361s
Feb 17 13:08:51.175: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12781261s
Feb 17 13:08:53.189: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142474345s
Feb 17 13:08:55.197: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.150685408s
STEP: Saw pod success
Feb 17 13:08:55.198: INFO: Pod "pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0" satisfied condition "success or failure"
Feb 17 13:08:55.202: INFO: Trying to get logs from node iruya-node pod pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0 container secret-volume-test:
STEP: delete the pod
Feb 17 13:08:55.369: INFO: Waiting for pod pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0 to disappear
Feb 17 13:08:55.376: INFO: Pod pod-secrets-ecc6232a-22bd-4588-b751-7f81e6ac98a0 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:08:55.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3686" for this suite.
Feb 17 13:09:01.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:09:01.523: INFO: namespace secrets-3686 deletion completed in 6.141487961s

• [SLOW TEST:16.569 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:09:01.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to
be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 17 13:09:01.687: INFO: Waiting up to 5m0s for pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6" in namespace "emptydir-1231" to be "success or failure"
Feb 17 13:09:01.701: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.683845ms
Feb 17 13:09:03.709: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021776111s
Feb 17 13:09:05.730: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042673357s
Feb 17 13:09:07.740: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052620761s
Feb 17 13:09:09.754: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067306629s
Feb 17 13:09:11.761: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Running", Reason="", readiness=true. Elapsed: 10.073494632s
Feb 17 13:09:13.772: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 12.084935703s
STEP: Saw pod success
Feb 17 13:09:13.772: INFO: Pod "pod-88479ce1-6953-4ec3-852b-1bc945a356b6" satisfied condition "success or failure"
Feb 17 13:09:13.786: INFO: Trying to get logs from node iruya-node pod pod-88479ce1-6953-4ec3-852b-1bc945a356b6 container test-container:
STEP: delete the pod
Feb 17 13:09:13.909: INFO: Waiting for pod pod-88479ce1-6953-4ec3-852b-1bc945a356b6 to disappear
Feb 17 13:09:13.919: INFO: Pod pod-88479ce1-6953-4ec3-852b-1bc945a356b6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:09:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1231" for this suite.
Feb 17 13:09:20.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:09:20.148: INFO: namespace emptydir-1231 deletion completed in 6.216756733s

• [SLOW TEST:18.626 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:09:20.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 17 13:09:20.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-429'
Feb 17 13:09:20.509: INFO: stderr: ""
Feb 17 13:09:20.509: INFO: stdout: "pod/pause created\n"
Feb 17 13:09:20.509: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 17 13:09:20.509: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-429" to be "running and ready"
Feb 17 13:09:21.104: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 594.490517ms
Feb 17 13:09:23.115: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.605487528s
Feb 17 13:09:25.122: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613428764s
Feb 17 13:09:27.131: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62226178s
Feb 17 13:09:29.139: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63007364s
Feb 17 13:09:32.786: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.277284251s
Feb 17 13:09:34.794: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.285177227s
Feb 17 13:09:36.802: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.29322591s
Feb 17 13:09:38.812: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 18.303435135s
Feb 17 13:09:38.813: INFO: Pod "pause" satisfied condition "running and ready"
Feb 17 13:09:38.813: INFO: Wanted all 1 pods to be running and ready. Result: true.
Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 17 13:09:38.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-429'
Feb 17 13:09:38.956: INFO: stderr: ""
Feb 17 13:09:38.956: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 17 13:09:38.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-429'
Feb 17 13:09:39.039: INFO: stderr: ""
Feb 17 13:09:39.039: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 19s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 17 13:09:39.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-429'
Feb 17 13:09:39.136: INFO: stderr: ""
Feb 17 13:09:39.137: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 17 13:09:39.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-429'
Feb 17 13:09:39.209: INFO: stderr: ""
Feb 17 13:09:39.209: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 19s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 17 13:09:39.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-429'
Feb 17 13:09:39.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running
resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 13:09:39.336: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 17 13:09:39.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-429'
Feb 17 13:09:39.498: INFO: stderr: "No resources found.\n"
Feb 17 13:09:39.499: INFO: stdout: ""
Feb 17 13:09:39.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-429 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 17 13:09:39.585: INFO: stderr: ""
Feb 17 13:09:39.586: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:09:39.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-429" for this suite.
Feb 17 13:09:45.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:09:45.778: INFO: namespace kubectl-429 deletion completed in 6.171603338s

• [SLOW TEST:25.629 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:09:45.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 17 13:09:45.923: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:09:46.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace
"kubectl-2526" for this suite.
Feb 17 13:09:52.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:09:52.189: INFO: namespace kubectl-2526 deletion completed in 6.148422068s

• [SLOW TEST:6.410 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:09:52.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 17 13:09:52.402: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4295,SelfLink:/api/v1/namespaces/watch-4295/configmaps/e2e-watch-test-label-changed,UID:f6a0b629-6d56-4a83-9e63-fd42557ebfcc,ResourceVersion:24696799,Generation:0,CreationTimestamp:2020-02-17 13:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 13:09:52.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4295,SelfLink:/api/v1/namespaces/watch-4295/configmaps/e2e-watch-test-label-changed,UID:f6a0b629-6d56-4a83-9e63-fd42557ebfcc,ResourceVersion:24696800,Generation:0,CreationTimestamp:2020-02-17 13:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 17 13:09:52.403: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4295,SelfLink:/api/v1/namespaces/watch-4295/configmaps/e2e-watch-test-label-changed,UID:f6a0b629-6d56-4a83-9e63-fd42557ebfcc,ResourceVersion:24696801,Generation:0,CreationTimestamp:2020-02-17 13:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation:
1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 17 13:10:02.594: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4295,SelfLink:/api/v1/namespaces/watch-4295/configmaps/e2e-watch-test-label-changed,UID:f6a0b629-6d56-4a83-9e63-fd42557ebfcc,ResourceVersion:24696817,Generation:0,CreationTimestamp:2020-02-17 13:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 13:10:02.594: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4295,SelfLink:/api/v1/namespaces/watch-4295/configmaps/e2e-watch-test-label-changed,UID:f6a0b629-6d56-4a83-9e63-fd42557ebfcc,ResourceVersion:24696818,Generation:0,CreationTimestamp:2020-02-17 13:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 17 13:10:02.595: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4295,SelfLink:/api/v1/namespaces/watch-4295/configmaps/e2e-watch-test-label-changed,UID:f6a0b629-6d56-4a83-9e63-fd42557ebfcc,ResourceVersion:24696819,Generation:0,CreationTimestamp:2020-02-17 13:09:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:10:02.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4295" for this suite.
Feb 17 13:10:08.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:10:08.749: INFO: namespace watch-4295 deletion completed in 6.149055972s

• [SLOW TEST:16.560 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:10:08.749: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8468
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8468
STEP: Creating statefulset with conflicting port in namespace statefulset-8468
STEP: Waiting until pod test-pod will start running in namespace statefulset-8468
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8468
Feb 17 13:10:23.020: INFO: Observed stateful pod in namespace: statefulset-8468, name: ss-0, uid: 8ff8494a-5eaa-4f0c-a85b-d8475096f19f, status phase: Pending. Waiting for statefulset controller to delete.
Feb 17 13:15:23.022: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 17 13:15:23.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-8468'
Feb 17 13:15:23.214: INFO: stderr: ""
Feb 17 13:15:23.214: INFO: stdout: "Name: ss-0\nNamespace: statefulset-8468\nPriority: 0\nNode: iruya-node/\nLabels: baz=blah\n controller-revision-hash=ss-6f98bdb9c4\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-n844x (ro)\nVolumes:\n default-token-n844x:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-n844x\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m6s kubelet, iruya-node Predicate PodFitsHostPorts failed\n"
Feb 17 13:15:23.215: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-8468 Priority: 0 Node: iruya-node/ Labels: baz=blah controller-revision-hash=ss-6f98bdb9c4 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-n844x (ro) Volumes: default-token-n844x: Type: Secret (a volume populated by a Secret) SecretName: default-token-n844x Optional: false QoS Class:
BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m6s kubelet, iruya-node Predicate PodFitsHostPorts failed
Feb 17 13:15:23.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-8468 --tail=100'
Feb 17 13:15:23.342: INFO: rc: 1
Feb 17 13:15:23.342: INFO: Last 100 log lines of ss-0:
Feb 17 13:15:23.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-8468'
Feb 17 13:15:23.457: INFO: stderr: ""
Feb 17 13:15:23.457: INFO: stdout: "Name: test-pod\nNamespace: statefulset-8468\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 17 Feb 2020 13:10:09 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nContainers:\n nginx:\n Container ID: docker://ad2ff51343e7e12079f66ff5092c97a88d84341e1473ee22240c8476ef6029d5\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Mon, 17 Feb 2020 13:10:21 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-n844x (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-n844x:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-n844x\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m6s kubelet, iruya-node Container image \"docker.io/library/nginx:1.14-alpine\" already
present on machine\n Normal Created 5m2s kubelet, iruya-node Created container nginx\n Normal Started 5m2s kubelet, iruya-node Started container nginx\n"
Feb 17 13:15:23.457: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-8468 Priority: 0 Node: iruya-node/10.96.3.65 Start Time: Mon, 17 Feb 2020 13:10:09 +0000 Labels: Annotations: Status: Running IP: 10.44.0.1 Containers: nginx: Container ID: docker://ad2ff51343e7e12079f66ff5092c97a88d84341e1473ee22240c8476ef6029d5 Image: docker.io/library/nginx:1.14-alpine Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Mon, 17 Feb 2020 13:10:21 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-n844x (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-n844x: Type: Secret (a volume populated by a Secret) SecretName: default-token-n844x Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m6s kubelet, iruya-node Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m2s kubelet, iruya-node Created container nginx Normal Started 5m2s kubelet, iruya-node Started container nginx
Feb 17 13:15:23.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-8468 --tail=100'
Feb 17 13:15:23.555: INFO: stderr: ""
Feb 17 13:15:23.555: INFO: stdout: ""
Feb 17 13:15:23.556: INFO: Last 100 log lines of test-pod:
Feb 17 13:15:23.556: INFO: Deleting all statefulset in ns statefulset-8468
Feb 17 13:15:23.559: INFO: Scaling statefulset ss to 0
Feb 17 13:15:33.584: INFO: Waiting for
statefulset status.replicas updated to 0
Feb 17 13:15:33.588: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-8468".
STEP: Found 16 events.
Feb 17 13:15:33.648: INFO: At 2020-02-17 13:10:09 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:09 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-8468/ss is recreating failed Pod ss-0
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:09 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:09 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:09 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:11 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:12 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:12 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:13 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:14 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:14 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:16 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:17 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:17 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:21 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Feb 17 13:15:33.649: INFO: At 2020-02-17 13:10:21 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Feb 17 13:15:33.653: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 17 13:15:33.653: INFO: test-pod iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:10:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:10:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:10:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:10:09 +0000 UTC }]
Feb 17 13:15:33.653: INFO:
Feb 17 13:15:33.664: INFO: Logging node info for node iruya-node
Feb 17 13:15:33.668: INFO: Node Info:
&Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:24697301,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-17 13:14:58 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-17 13:14:58 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-17 13:14:58 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-17 13:14:58 +0000 UTC 2019-08-04 09:02:19 +0000 UTC 
KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 
61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[aquasec/kube-bench@sha256:33d50ec2fdc6644ffa70b088af1a9932f16d6bb9391a9f73045c8c6b4f73f4e4 aquasec/kube-bench:latest] 21536876} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 17 13:15:33.669: INFO: Logging kubelet events for node iruya-node Feb 17 13:15:33.673: INFO: Logging pods the kubelet thinks is on node iruya-node Feb 17 13:15:33.689: INFO: kube-bench-j7kcs started at 2020-02-11 06:42:30 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.689: INFO: Container kube-bench ready: false, restart count 0 Feb 17 13:15:33.689: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.689: INFO: Container kube-proxy ready: true, restart count 0 Feb 17 13:15:33.689: INFO: test-pod started at 2020-02-17 13:10:09 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.689: INFO: Container nginx ready: true, restart count 0 Feb 17 13:15:33.689: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded) Feb 17 13:15:33.689: INFO: Container weave ready: true, restart count 0 Feb 17 13:15:33.689: INFO: Container weave-npc ready: true, restart count 0 W0217 13:15:33.694393 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 17 13:15:33.791: INFO: Latency metrics for node iruya-node Feb 17 13:15:33.791: INFO: Logging node info for node iruya-server-sfge57q7djm7 Feb 17 13:15:33.798: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:24697349,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-17 13:15:29 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-17 13:15:29 +0000 UTC 2019-08-04 08:52:04 +0000 UTC 
KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-17 13:15:29 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-17 13:15:29 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} 
{[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 
k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 17 13:15:33.799: INFO: Logging kubelet events for node iruya-server-sfge57q7djm7 Feb 17 13:15:33.813: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 Feb 17 13:15:33.836: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container etcd ready: true, restart count 0 Feb 17 13:15:33.836: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded) Feb 17 13:15:33.836: INFO: Container weave ready: true, restart count 0 Feb 17 13:15:33.836: INFO: Container weave-npc ready: true, restart count 0 Feb 17 13:15:33.836: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container coredns ready: true, restart count 0 Feb 17 13:15:33.836: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container kube-controller-manager ready: true, restart count 23 Feb 17 13:15:33.836: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container kube-proxy ready: true, restart count 0 Feb 17 13:15:33.836: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container kube-apiserver ready: true, restart count 0 Feb 17 13:15:33.836: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container kube-scheduler ready: true, restart count 15 Feb 17 13:15:33.836: INFO: coredns-5c98db65d4-xx8w8 
started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Feb 17 13:15:33.836: INFO: Container coredns ready: true, restart count 0 W0217 13:15:33.870610 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 17 13:15:33.928: INFO: Latency metrics for node iruya-server-sfge57q7djm7 Feb 17 13:15:33.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8468" for this suite. Feb 17 13:15:55.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:15:56.105: INFO: namespace statefulset-8468 deletion completed in 22.167189852s • Failure [347.355 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 17 13:15:23.022: Pod ss-0 expected to be re-created at least once /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:15:56.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be 
provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 17 13:15:56.238: INFO: Waiting up to 5m0s for pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017" in namespace "downward-api-4225" to be "success or failure" Feb 17 13:15:56.253: INFO: Pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017": Phase="Pending", Reason="", readiness=false. Elapsed: 14.691287ms Feb 17 13:15:58.260: INFO: Pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022052667s Feb 17 13:16:00.270: INFO: Pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0318961s Feb 17 13:16:02.282: INFO: Pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044088733s Feb 17 13:16:04.295: INFO: Pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05696691s STEP: Saw pod success Feb 17 13:16:04.295: INFO: Pod "downward-api-ce1124fb-6321-426f-a745-afd35c3ce017" satisfied condition "success or failure" Feb 17 13:16:04.301: INFO: Trying to get logs from node iruya-node pod downward-api-ce1124fb-6321-426f-a745-afd35c3ce017 container dapi-container: STEP: delete the pod Feb 17 13:16:04.369: INFO: Waiting for pod downward-api-ce1124fb-6321-426f-a745-afd35c3ce017 to disappear Feb 17 13:16:04.375: INFO: Pod downward-api-ce1124fb-6321-426f-a745-afd35c3ce017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:16:04.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4225" for this suite. 
Feb 17 13:16:10.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:16:10.535: INFO: namespace downward-api-4225 deletion completed in 6.155304907s • [SLOW TEST:14.429 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:16:10.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-069acaca-9533-4a41-a9e4-305c3ec0d377 STEP: Creating configMap with name cm-test-opt-upd-c4e0619a-d79d-4a95-a7af-734a27766823 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-069acaca-9533-4a41-a9e4-305c3ec0d377 STEP: Updating configmap cm-test-opt-upd-c4e0619a-d79d-4a95-a7af-734a27766823 STEP: Creating configMap with name cm-test-opt-create-50507a8c-b24c-436d-8aef-05d5fd9fe9f1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:16:28.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7064" for this suite. Feb 17 13:16:50.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:16:51.055: INFO: namespace projected-7064 deletion completed in 22.131134654s • [SLOW TEST:40.520 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:16:51.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-8c0645ca-9cce-4725-aa87-8a0a6e49229a STEP: Creating secret with name s-test-opt-upd-d0b65f67-3dac-4a5f-856a-dc4d2d864553 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8c0645ca-9cce-4725-aa87-8a0a6e49229a STEP: Updating secret s-test-opt-upd-d0b65f67-3dac-4a5f-856a-dc4d2d864553 STEP: Creating secret with name 
s-test-opt-create-00a6b0d2-8723-4664-a713-a6760d4feeb2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:18:29.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3755" for this suite. Feb 17 13:18:51.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:18:51.266: INFO: namespace secrets-3755 deletion completed in 22.214441849s • [SLOW TEST:120.211 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:18:51.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-71626253-256a-4dc1-b0f2-3208f45a9d99 STEP: Creating a pod to test consume configMaps Feb 17 13:18:51.496: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1" in namespace "configmap-386" to be "success or failure" Feb 17 13:18:51.510: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.515454ms Feb 17 13:18:53.521: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024692369s Feb 17 13:18:55.533: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037373864s Feb 17 13:18:57.553: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056849214s Feb 17 13:18:59.559: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063459043s Feb 17 13:19:01.568: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071666291s STEP: Saw pod success Feb 17 13:19:01.568: INFO: Pod "pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1" satisfied condition "success or failure" Feb 17 13:19:01.573: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1 container configmap-volume-test: STEP: delete the pod Feb 17 13:19:01.696: INFO: Waiting for pod pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1 to disappear Feb 17 13:19:01.703: INFO: Pod pod-configmaps-d99a1adf-c086-4137-a6e6-252185382ac1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:19:01.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-386" for this suite. 
Feb 17 13:19:07.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:19:07.934: INFO: namespace configmap-386 deletion completed in 6.221039943s • [SLOW TEST:16.666 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:19:07.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 17 13:19:08.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720" in namespace "downward-api-4778" to be "success or failure" Feb 17 13:19:08.073: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.063989ms Feb 17 13:19:10.079: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016043846s Feb 17 13:19:12.142: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079209667s Feb 17 13:19:14.151: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087842128s Feb 17 13:19:16.159: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096121686s Feb 17 13:19:18.165: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102106989s STEP: Saw pod success Feb 17 13:19:18.165: INFO: Pod "downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720" satisfied condition "success or failure" Feb 17 13:19:18.169: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720 container client-container: STEP: delete the pod Feb 17 13:19:18.250: INFO: Waiting for pod downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720 to disappear Feb 17 13:19:18.351: INFO: Pod downwardapi-volume-b23588e5-e7e4-4a74-a5d4-d8e76cdfb720 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:19:18.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4778" for this suite. 
Feb 17 13:19:24.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:19:24.510: INFO: namespace downward-api-4778 deletion completed in 6.152418396s • [SLOW TEST:16.575 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:19:24.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7735 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7735 STEP: Waiting until all stateful set ss replicas will 
be running in namespace statefulset-7735 Feb 17 13:19:24.655: INFO: Found 0 stateful pods, waiting for 1 Feb 17 13:19:34.668: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 17 13:19:34.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 13:19:37.329: INFO: stderr: "I0217 13:19:36.952107 887 log.go:172] (0xc00089eb00) (0xc0007670e0) Create stream\nI0217 13:19:36.952162 887 log.go:172] (0xc00089eb00) (0xc0007670e0) Stream added, broadcasting: 1\nI0217 13:19:36.964673 887 log.go:172] (0xc00089eb00) Reply frame received for 1\nI0217 13:19:36.964768 887 log.go:172] (0xc00089eb00) (0xc000349ae0) Create stream\nI0217 13:19:36.964781 887 log.go:172] (0xc00089eb00) (0xc000349ae0) Stream added, broadcasting: 3\nI0217 13:19:36.967608 887 log.go:172] (0xc00089eb00) Reply frame received for 3\nI0217 13:19:36.967688 887 log.go:172] (0xc00089eb00) (0xc00076c0a0) Create stream\nI0217 13:19:36.967744 887 log.go:172] (0xc00089eb00) (0xc00076c0a0) Stream added, broadcasting: 5\nI0217 13:19:36.970871 887 log.go:172] (0xc00089eb00) Reply frame received for 5\nI0217 13:19:37.106006 887 log.go:172] (0xc00089eb00) Data frame received for 5\nI0217 13:19:37.106039 887 log.go:172] (0xc00076c0a0) (5) Data frame handling\nI0217 13:19:37.106053 887 log.go:172] (0xc00076c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:19:37.161454 887 log.go:172] (0xc00089eb00) Data frame received for 3\nI0217 13:19:37.161475 887 log.go:172] (0xc000349ae0) (3) Data frame handling\nI0217 13:19:37.161481 887 log.go:172] (0xc000349ae0) (3) Data frame sent\nI0217 13:19:37.322197 887 log.go:172] (0xc00089eb00) Data frame received for 1\nI0217 13:19:37.322275 887 log.go:172] (0xc0007670e0) (1) Data frame handling\nI0217 
13:19:37.322328 887 log.go:172] (0xc0007670e0) (1) Data frame sent\nI0217 13:19:37.322362 887 log.go:172] (0xc00089eb00) (0xc0007670e0) Stream removed, broadcasting: 1\nI0217 13:19:37.322516 887 log.go:172] (0xc00089eb00) (0xc000349ae0) Stream removed, broadcasting: 3\nI0217 13:19:37.322792 887 log.go:172] (0xc00089eb00) (0xc00076c0a0) Stream removed, broadcasting: 5\nI0217 13:19:37.322869 887 log.go:172] (0xc00089eb00) Go away received\nI0217 13:19:37.323001 887 log.go:172] (0xc00089eb00) (0xc0007670e0) Stream removed, broadcasting: 1\nI0217 13:19:37.323061 887 log.go:172] (0xc00089eb00) (0xc000349ae0) Stream removed, broadcasting: 3\nI0217 13:19:37.323126 887 log.go:172] (0xc00089eb00) (0xc00076c0a0) Stream removed, broadcasting: 5\n" Feb 17 13:19:37.329: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 13:19:37.329: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 13:19:37.336: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 17 13:19:47.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 17 13:19:47.346: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 13:19:47.392: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999399s Feb 17 13:19:48.404: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.973905833s Feb 17 13:19:49.414: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.962226945s Feb 17 13:19:50.426: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.952171727s Feb 17 13:19:51.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.939454573s Feb 17 13:19:52.444: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.930956399s Feb 17 13:19:53.453: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.92184734s Feb 17 
13:19:54.464: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.912869673s Feb 17 13:19:55.479: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.901633497s Feb 17 13:19:56.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 887.105766ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7735 Feb 17 13:19:57.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 13:19:58.066: INFO: stderr: "I0217 13:19:57.717725 908 log.go:172] (0xc0008f8630) (0xc0005dca00) Create stream\nI0217 13:19:57.717861 908 log.go:172] (0xc0008f8630) (0xc0005dca00) Stream added, broadcasting: 1\nI0217 13:19:57.726027 908 log.go:172] (0xc0008f8630) Reply frame received for 1\nI0217 13:19:57.726098 908 log.go:172] (0xc0008f8630) (0xc000816000) Create stream\nI0217 13:19:57.726127 908 log.go:172] (0xc0008f8630) (0xc000816000) Stream added, broadcasting: 3\nI0217 13:19:57.728871 908 log.go:172] (0xc0008f8630) Reply frame received for 3\nI0217 13:19:57.728913 908 log.go:172] (0xc0008f8630) (0xc0005dcaa0) Create stream\nI0217 13:19:57.728924 908 log.go:172] (0xc0008f8630) (0xc0005dcaa0) Stream added, broadcasting: 5\nI0217 13:19:57.732925 908 log.go:172] (0xc0008f8630) Reply frame received for 5\nI0217 13:19:57.880453 908 log.go:172] (0xc0008f8630) Data frame received for 5\nI0217 13:19:57.880755 908 log.go:172] (0xc0005dcaa0) (5) Data frame handling\nI0217 13:19:57.880863 908 log.go:172] (0xc0005dcaa0) (5) Data frame sent\nI0217 13:19:57.881074 908 log.go:172] (0xc0008f8630) Data frame received for 3\nI0217 13:19:57.881093 908 log.go:172] (0xc000816000) (3) Data frame handling\nI0217 13:19:57.881108 908 log.go:172] (0xc000816000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 13:19:58.059084 908 log.go:172] 
(0xc0008f8630) (0xc000816000) Stream removed, broadcasting: 3\nI0217 13:19:58.059342 908 log.go:172] (0xc0008f8630) Data frame received for 1\nI0217 13:19:58.059363 908 log.go:172] (0xc0008f8630) (0xc0005dcaa0) Stream removed, broadcasting: 5\nI0217 13:19:58.059436 908 log.go:172] (0xc0005dca00) (1) Data frame handling\nI0217 13:19:58.059461 908 log.go:172] (0xc0005dca00) (1) Data frame sent\nI0217 13:19:58.059478 908 log.go:172] (0xc0008f8630) (0xc0005dca00) Stream removed, broadcasting: 1\nI0217 13:19:58.059634 908 log.go:172] (0xc0008f8630) Go away received\nI0217 13:19:58.060123 908 log.go:172] (0xc0008f8630) (0xc0005dca00) Stream removed, broadcasting: 1\nI0217 13:19:58.060138 908 log.go:172] (0xc0008f8630) (0xc000816000) Stream removed, broadcasting: 3\nI0217 13:19:58.060148 908 log.go:172] (0xc0008f8630) (0xc0005dcaa0) Stream removed, broadcasting: 5\n" Feb 17 13:19:58.066: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 13:19:58.066: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 13:19:58.073: INFO: Found 1 stateful pods, waiting for 3 Feb 17 13:20:08.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 13:20:08.080: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 13:20:08.080: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 17 13:20:18.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 13:20:18.080: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 13:20:18.080: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 17 13:20:18.086: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 13:20:18.675: INFO: stderr: "I0217 13:20:18.250946 928 log.go:172] (0xc000650420) (0xc00067c6e0) Create stream\nI0217 13:20:18.251059 928 log.go:172] (0xc000650420) (0xc00067c6e0) Stream added, broadcasting: 1\nI0217 13:20:18.257617 928 log.go:172] (0xc000650420) Reply frame received for 1\nI0217 13:20:18.257676 928 log.go:172] (0xc000650420) (0xc0001221e0) Create stream\nI0217 13:20:18.257688 928 log.go:172] (0xc000650420) (0xc0001221e0) Stream added, broadcasting: 3\nI0217 13:20:18.259482 928 log.go:172] (0xc000650420) Reply frame received for 3\nI0217 13:20:18.259540 928 log.go:172] (0xc000650420) (0xc00044e8c0) Create stream\nI0217 13:20:18.259561 928 log.go:172] (0xc000650420) (0xc00044e8c0) Stream added, broadcasting: 5\nI0217 13:20:18.262214 928 log.go:172] (0xc000650420) Reply frame received for 5\nI0217 13:20:18.399531 928 log.go:172] (0xc000650420) Data frame received for 3\nI0217 13:20:18.399598 928 log.go:172] (0xc0001221e0) (3) Data frame handling\nI0217 13:20:18.399610 928 log.go:172] (0xc0001221e0) (3) Data frame sent\nI0217 13:20:18.399645 928 log.go:172] (0xc000650420) Data frame received for 5\nI0217 13:20:18.399682 928 log.go:172] (0xc00044e8c0) (5) Data frame handling\nI0217 13:20:18.399721 928 log.go:172] (0xc00044e8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:20:18.664965 928 log.go:172] (0xc000650420) Data frame received for 1\nI0217 13:20:18.665119 928 log.go:172] (0xc000650420) (0xc0001221e0) Stream removed, broadcasting: 3\nI0217 13:20:18.665212 928 log.go:172] (0xc000650420) (0xc00044e8c0) Stream removed, broadcasting: 5\nI0217 13:20:18.665357 928 log.go:172] (0xc00067c6e0) (1) Data frame handling\nI0217 13:20:18.665395 928 log.go:172] (0xc00067c6e0) (1) Data frame sent\nI0217 13:20:18.665404 928 log.go:172] (0xc000650420) 
(0xc00067c6e0) Stream removed, broadcasting: 1\nI0217 13:20:18.665414 928 log.go:172] (0xc000650420) Go away received\nI0217 13:20:18.665989 928 log.go:172] (0xc000650420) (0xc00067c6e0) Stream removed, broadcasting: 1\nI0217 13:20:18.666046 928 log.go:172] (0xc000650420) (0xc0001221e0) Stream removed, broadcasting: 3\nI0217 13:20:18.666085 928 log.go:172] (0xc000650420) (0xc00044e8c0) Stream removed, broadcasting: 5\n" Feb 17 13:20:18.676: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 13:20:18.676: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 13:20:18.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 13:20:19.181: INFO: stderr: "I0217 13:20:18.833872 947 log.go:172] (0xc0008ee0b0) (0xc00084e640) Create stream\nI0217 13:20:18.834028 947 log.go:172] (0xc0008ee0b0) (0xc00084e640) Stream added, broadcasting: 1\nI0217 13:20:18.837328 947 log.go:172] (0xc0008ee0b0) Reply frame received for 1\nI0217 13:20:18.837382 947 log.go:172] (0xc0008ee0b0) (0xc0003fe3c0) Create stream\nI0217 13:20:18.837388 947 log.go:172] (0xc0008ee0b0) (0xc0003fe3c0) Stream added, broadcasting: 3\nI0217 13:20:18.838417 947 log.go:172] (0xc0008ee0b0) Reply frame received for 3\nI0217 13:20:18.838448 947 log.go:172] (0xc0008ee0b0) (0xc00084e6e0) Create stream\nI0217 13:20:18.838454 947 log.go:172] (0xc0008ee0b0) (0xc00084e6e0) Stream added, broadcasting: 5\nI0217 13:20:18.839527 947 log.go:172] (0xc0008ee0b0) Reply frame received for 5\nI0217 13:20:18.997217 947 log.go:172] (0xc0008ee0b0) Data frame received for 5\nI0217 13:20:18.997249 947 log.go:172] (0xc00084e6e0) (5) Data frame handling\nI0217 13:20:18.997275 947 log.go:172] (0xc00084e6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 
13:20:19.092549 947 log.go:172] (0xc0008ee0b0) Data frame received for 3\nI0217 13:20:19.092592 947 log.go:172] (0xc0003fe3c0) (3) Data frame handling\nI0217 13:20:19.092609 947 log.go:172] (0xc0003fe3c0) (3) Data frame sent\nI0217 13:20:19.175017 947 log.go:172] (0xc0008ee0b0) Data frame received for 1\nI0217 13:20:19.175128 947 log.go:172] (0xc00084e640) (1) Data frame handling\nI0217 13:20:19.175142 947 log.go:172] (0xc00084e640) (1) Data frame sent\nI0217 13:20:19.175154 947 log.go:172] (0xc0008ee0b0) (0xc00084e640) Stream removed, broadcasting: 1\nI0217 13:20:19.175168 947 log.go:172] (0xc0008ee0b0) (0xc0003fe3c0) Stream removed, broadcasting: 3\nI0217 13:20:19.175187 947 log.go:172] (0xc0008ee0b0) (0xc00084e6e0) Stream removed, broadcasting: 5\nI0217 13:20:19.175792 947 log.go:172] (0xc0008ee0b0) (0xc00084e640) Stream removed, broadcasting: 1\nI0217 13:20:19.175824 947 log.go:172] (0xc0008ee0b0) (0xc0003fe3c0) Stream removed, broadcasting: 3\nI0217 13:20:19.175831 947 log.go:172] (0xc0008ee0b0) (0xc00084e6e0) Stream removed, broadcasting: 5\nI0217 13:20:19.176081 947 log.go:172] (0xc0008ee0b0) Go away received\n" Feb 17 13:20:19.181: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 13:20:19.181: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 13:20:19.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 13:20:19.663: INFO: stderr: "I0217 13:20:19.363152 962 log.go:172] (0xc000a16370) (0xc0009ec780) Create stream\nI0217 13:20:19.363320 962 log.go:172] (0xc000a16370) (0xc0009ec780) Stream added, broadcasting: 1\nI0217 13:20:19.373340 962 log.go:172] (0xc000a16370) Reply frame received for 1\nI0217 13:20:19.373374 962 log.go:172] (0xc000a16370) (0xc000642320) Create stream\nI0217 13:20:19.373389 962 
log.go:172] (0xc000a16370) (0xc000642320) Stream added, broadcasting: 3\nI0217 13:20:19.375779 962 log.go:172] (0xc000a16370) Reply frame received for 3\nI0217 13:20:19.375808 962 log.go:172] (0xc000a16370) (0xc000209b80) Create stream\nI0217 13:20:19.375820 962 log.go:172] (0xc000a16370) (0xc000209b80) Stream added, broadcasting: 5\nI0217 13:20:19.377394 962 log.go:172] (0xc000a16370) Reply frame received for 5\nI0217 13:20:19.493322 962 log.go:172] (0xc000a16370) Data frame received for 5\nI0217 13:20:19.493387 962 log.go:172] (0xc000209b80) (5) Data frame handling\nI0217 13:20:19.493406 962 log.go:172] (0xc000209b80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:20:19.540775 962 log.go:172] (0xc000a16370) Data frame received for 3\nI0217 13:20:19.540810 962 log.go:172] (0xc000642320) (3) Data frame handling\nI0217 13:20:19.540827 962 log.go:172] (0xc000642320) (3) Data frame sent\nI0217 13:20:19.654208 962 log.go:172] (0xc000a16370) Data frame received for 1\nI0217 13:20:19.654291 962 log.go:172] (0xc0009ec780) (1) Data frame handling\nI0217 13:20:19.654305 962 log.go:172] (0xc0009ec780) (1) Data frame sent\nI0217 13:20:19.654317 962 log.go:172] (0xc000a16370) (0xc0009ec780) Stream removed, broadcasting: 1\nI0217 13:20:19.654354 962 log.go:172] (0xc000a16370) (0xc000642320) Stream removed, broadcasting: 3\nI0217 13:20:19.654386 962 log.go:172] (0xc000a16370) (0xc000209b80) Stream removed, broadcasting: 5\nI0217 13:20:19.654420 962 log.go:172] (0xc000a16370) Go away received\nI0217 13:20:19.654748 962 log.go:172] (0xc000a16370) (0xc0009ec780) Stream removed, broadcasting: 1\nI0217 13:20:19.654779 962 log.go:172] (0xc000a16370) (0xc000642320) Stream removed, broadcasting: 3\nI0217 13:20:19.654796 962 log.go:172] (0xc000a16370) (0xc000209b80) Stream removed, broadcasting: 5\n" Feb 17 13:20:19.664: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 13:20:19.664: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 13:20:19.664: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 13:20:19.674: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 17 13:20:29.688: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 17 13:20:29.688: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 17 13:20:29.688: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 17 13:20:29.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999695s Feb 17 13:20:30.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984656522s Feb 17 13:20:31.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971319246s Feb 17 13:20:32.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962853895s Feb 17 13:20:33.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.952556734s Feb 17 13:20:34.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.937201462s Feb 17 13:20:36.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.926914203s Feb 17 13:20:37.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.348843879s Feb 17 13:20:38.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.329657393s Feb 17 13:20:39.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 313.934397ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7735 Feb 17 13:20:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 13:20:40.981: INFO: stderr: "I0217 13:20:40.658327 984 log.go:172] 
(0xc0007ca580) (0xc000522a00) Create stream\nI0217 13:20:40.658684 984 log.go:172] (0xc0007ca580) (0xc000522a00) Stream added, broadcasting: 1\nI0217 13:20:40.665635 984 log.go:172] (0xc0007ca580) Reply frame received for 1\nI0217 13:20:40.665706 984 log.go:172] (0xc0007ca580) (0xc0008a4000) Create stream\nI0217 13:20:40.665720 984 log.go:172] (0xc0007ca580) (0xc0008a4000) Stream added, broadcasting: 3\nI0217 13:20:40.667259 984 log.go:172] (0xc0007ca580) Reply frame received for 3\nI0217 13:20:40.667289 984 log.go:172] (0xc0007ca580) (0xc0008a40a0) Create stream\nI0217 13:20:40.667302 984 log.go:172] (0xc0007ca580) (0xc0008a40a0) Stream added, broadcasting: 5\nI0217 13:20:40.670886 984 log.go:172] (0xc0007ca580) Reply frame received for 5\nI0217 13:20:40.834074 984 log.go:172] (0xc0007ca580) Data frame received for 5\nI0217 13:20:40.834131 984 log.go:172] (0xc0007ca580) Data frame received for 3\nI0217 13:20:40.834153 984 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0217 13:20:40.834163 984 log.go:172] (0xc0008a4000) (3) Data frame sent\nI0217 13:20:40.834184 984 log.go:172] (0xc0008a40a0) (5) Data frame handling\nI0217 13:20:40.834193 984 log.go:172] (0xc0008a40a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 13:20:40.973105 984 log.go:172] (0xc0007ca580) (0xc0008a4000) Stream removed, broadcasting: 3\nI0217 13:20:40.973838 984 log.go:172] (0xc0007ca580) Data frame received for 1\nI0217 13:20:40.974023 984 log.go:172] (0xc0007ca580) (0xc0008a40a0) Stream removed, broadcasting: 5\nI0217 13:20:40.974244 984 log.go:172] (0xc000522a00) (1) Data frame handling\nI0217 13:20:40.974354 984 log.go:172] (0xc000522a00) (1) Data frame sent\nI0217 13:20:40.974386 984 log.go:172] (0xc0007ca580) (0xc000522a00) Stream removed, broadcasting: 1\nI0217 13:20:40.974444 984 log.go:172] (0xc0007ca580) Go away received\nI0217 13:20:40.974851 984 log.go:172] (0xc0007ca580) (0xc000522a00) Stream removed, broadcasting: 1\nI0217 13:20:40.974880 984 
log.go:172] (0xc0007ca580) (0xc0008a4000) Stream removed, broadcasting: 3\nI0217 13:20:40.974885 984 log.go:172] (0xc0007ca580) (0xc0008a40a0) Stream removed, broadcasting: 5\n" Feb 17 13:20:40.981: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 13:20:40.981: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 13:20:40.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 13:20:41.297: INFO: stderr: "I0217 13:20:41.119706 1003 log.go:172] (0xc00093e2c0) (0xc00078c640) Create stream\nI0217 13:20:41.119798 1003 log.go:172] (0xc00093e2c0) (0xc00078c640) Stream added, broadcasting: 1\nI0217 13:20:41.121654 1003 log.go:172] (0xc00093e2c0) Reply frame received for 1\nI0217 13:20:41.121676 1003 log.go:172] (0xc00093e2c0) (0xc00085d2c0) Create stream\nI0217 13:20:41.121682 1003 log.go:172] (0xc00093e2c0) (0xc00085d2c0) Stream added, broadcasting: 3\nI0217 13:20:41.122766 1003 log.go:172] (0xc00093e2c0) Reply frame received for 3\nI0217 13:20:41.122819 1003 log.go:172] (0xc00093e2c0) (0xc000208000) Create stream\nI0217 13:20:41.122833 1003 log.go:172] (0xc00093e2c0) (0xc000208000) Stream added, broadcasting: 5\nI0217 13:20:41.124998 1003 log.go:172] (0xc00093e2c0) Reply frame received for 5\nI0217 13:20:41.204393 1003 log.go:172] (0xc00093e2c0) Data frame received for 5\nI0217 13:20:41.204564 1003 log.go:172] (0xc000208000) (5) Data frame handling\nI0217 13:20:41.204587 1003 log.go:172] (0xc000208000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 13:20:41.204607 1003 log.go:172] (0xc00093e2c0) Data frame received for 3\nI0217 13:20:41.204620 1003 log.go:172] (0xc00085d2c0) (3) Data frame handling\nI0217 13:20:41.204632 1003 log.go:172] (0xc00085d2c0) (3) Data frame sent\nI0217 
13:20:41.290795 1003 log.go:172] (0xc00093e2c0) (0xc00085d2c0) Stream removed, broadcasting: 3\nI0217 13:20:41.290935 1003 log.go:172] (0xc00093e2c0) Data frame received for 1\nI0217 13:20:41.290957 1003 log.go:172] (0xc00078c640) (1) Data frame handling\nI0217 13:20:41.290973 1003 log.go:172] (0xc00078c640) (1) Data frame sent\nI0217 13:20:41.291065 1003 log.go:172] (0xc00093e2c0) (0xc00078c640) Stream removed, broadcasting: 1\nI0217 13:20:41.291217 1003 log.go:172] (0xc00093e2c0) (0xc000208000) Stream removed, broadcasting: 5\nI0217 13:20:41.291328 1003 log.go:172] (0xc00093e2c0) Go away received\nI0217 13:20:41.291520 1003 log.go:172] (0xc00093e2c0) (0xc00078c640) Stream removed, broadcasting: 1\nI0217 13:20:41.291533 1003 log.go:172] (0xc00093e2c0) (0xc00085d2c0) Stream removed, broadcasting: 3\nI0217 13:20:41.291544 1003 log.go:172] (0xc00093e2c0) (0xc000208000) Stream removed, broadcasting: 5\n" Feb 17 13:20:41.297: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 13:20:41.298: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 13:20:41.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7735 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 13:20:41.772: INFO: stderr: "I0217 13:20:41.472274 1022 log.go:172] (0xc00010ef20) (0xc0006aaa00) Create stream\nI0217 13:20:41.472363 1022 log.go:172] (0xc00010ef20) (0xc0006aaa00) Stream added, broadcasting: 1\nI0217 13:20:41.480912 1022 log.go:172] (0xc00010ef20) Reply frame received for 1\nI0217 13:20:41.480983 1022 log.go:172] (0xc00010ef20) (0xc00097a000) Create stream\nI0217 13:20:41.480999 1022 log.go:172] (0xc00010ef20) (0xc00097a000) Stream added, broadcasting: 3\nI0217 13:20:41.483253 1022 log.go:172] (0xc00010ef20) Reply frame received for 3\nI0217 13:20:41.483284 1022 log.go:172] (0xc00010ef20) 
(0xc0006aaaa0) Create stream\nI0217 13:20:41.483294 1022 log.go:172] (0xc00010ef20) (0xc0006aaaa0) Stream added, broadcasting: 5\nI0217 13:20:41.484968 1022 log.go:172] (0xc00010ef20) Reply frame received for 5\nI0217 13:20:41.627107 1022 log.go:172] (0xc00010ef20) Data frame received for 5\nI0217 13:20:41.627185 1022 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0217 13:20:41.627196 1022 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 13:20:41.627204 1022 log.go:172] (0xc00010ef20) Data frame received for 3\nI0217 13:20:41.627207 1022 log.go:172] (0xc00097a000) (3) Data frame handling\nI0217 13:20:41.627211 1022 log.go:172] (0xc00097a000) (3) Data frame sent\nI0217 13:20:41.765242 1022 log.go:172] (0xc00010ef20) Data frame received for 1\nI0217 13:20:41.765319 1022 log.go:172] (0xc00010ef20) (0xc0006aaaa0) Stream removed, broadcasting: 5\nI0217 13:20:41.765349 1022 log.go:172] (0xc0006aaa00) (1) Data frame handling\nI0217 13:20:41.765360 1022 log.go:172] (0xc0006aaa00) (1) Data frame sent\nI0217 13:20:41.765417 1022 log.go:172] (0xc00010ef20) (0xc00097a000) Stream removed, broadcasting: 3\nI0217 13:20:41.765440 1022 log.go:172] (0xc00010ef20) (0xc0006aaa00) Stream removed, broadcasting: 1\nI0217 13:20:41.765451 1022 log.go:172] (0xc00010ef20) Go away received\nI0217 13:20:41.765883 1022 log.go:172] (0xc00010ef20) (0xc0006aaa00) Stream removed, broadcasting: 1\nI0217 13:20:41.765903 1022 log.go:172] (0xc00010ef20) (0xc00097a000) Stream removed, broadcasting: 3\nI0217 13:20:41.765914 1022 log.go:172] (0xc00010ef20) (0xc0006aaaa0) Stream removed, broadcasting: 5\n" Feb 17 13:20:41.772: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 13:20:41.772: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 13:20:41.772: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was 
scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 17 13:21:21.814: INFO: Deleting all statefulset in ns statefulset-7735 Feb 17 13:21:21.821: INFO: Scaling statefulset ss to 0 Feb 17 13:21:21.835: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 13:21:21.838: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:21:21.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7735" for this suite. Feb 17 13:21:27.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:21:28.061: INFO: namespace statefulset-7735 deletion completed in 6.185676321s • [SLOW TEST:123.551 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:21:28.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Feb 17 13:21:28.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 17 13:21:28.316: INFO: stderr: "" Feb 17 13:21:28.316: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:21:28.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1464" for this suite. 
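The api-versions check above only asserts that the core group `v1` appears as a whole line in `kubectl api-versions` output. A minimal offline sketch of that check, using a captured sample of the list in place of a live cluster (a real run would pipe `kubectl api-versions` directly):

```shell
#!/bin/sh
# Hypothetical stand-in for `kubectl api-versions` output; the log above
# shows the full list a live apiserver returned.
versions='apps/v1
batch/v1
storage.k8s.io/v1
v1'
# The conformance check: "v1" must be present as an exact line (-x),
# so "apps/v1" etc. do not count as a match.
if printf '%s\n' "$versions" | grep -qx 'v1'; then
  echo "v1 is available"
fi
```

`grep -x` is what makes this robust: every group version ends in `v1`, so a substring match would always succeed.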
Feb 17 13:21:34.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:21:34.492: INFO: namespace kubectl-1464 deletion completed in 6.169204483s • [SLOW TEST:6.429 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:21:34.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-1730a06d-d7b2-4fea-9166-a160e17adbc5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:21:46.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8018" for this suite. 
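The binary-data test above depends on ConfigMap `binaryData` values surviving a base64 round-trip into the mounted volume. A minimal local sketch of that round-trip, assuming GNU `base64` with the `-d` decode flag:

```shell
#!/bin/sh
# Write a few non-UTF-8 bytes, encode them the way a ConfigMap binaryData
# value is stored in the API, decode, and confirm the bytes are unchanged.
workdir=$(mktemp -d)
printf '\377\376\375\001' > "$workdir/orig.bin"
base64 "$workdir/orig.bin" > "$workdir/encoded.txt"
base64 -d "$workdir/encoded.txt" > "$workdir/decoded.bin"
cmp -s "$workdir/orig.bin" "$workdir/decoded.bin" && echo "binary data intact"
rm -rf "$workdir"
```

In the cluster the kubelet performs the decode when projecting the key into the volume; the sketch only demonstrates that the encoding is lossless for arbitrary bytes.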
Feb 17 13:22:08.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:22:08.928: INFO: namespace configmap-8018 deletion completed in 22.142416038s

• [SLOW TEST:34.435 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:22:08.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 13:22:09.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3391'
Feb 17 13:22:09.181: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 13:22:09.182: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 17 13:22:09.198: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 17 13:22:09.206: INFO: scanned /root for discovery docs:
Feb 17 13:22:09.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3391'
Feb 17 13:22:33.670: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 17 13:22:33.671: INFO: stdout: "Created e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc\nScaling up e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 17 13:22:33.671: INFO: stdout: "Created e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc\nScaling up e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 17 13:22:33.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3391'
Feb 17 13:22:33.805: INFO: stderr: ""
Feb 17 13:22:33.805: INFO: stdout: "e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc-cd7w9 "
Feb 17 13:22:33.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc-cd7w9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3391'
Feb 17 13:22:33.923: INFO: stderr: ""
Feb 17 13:22:33.923: INFO: stdout: "true"
Feb 17 13:22:33.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc-cd7w9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3391'
Feb 17 13:22:34.018: INFO: stderr: ""
Feb 17 13:22:34.018: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 17 13:22:34.018: INFO: e2e-test-nginx-rc-96b478a5c6bc151cd79cb1606a3898fc-cd7w9 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 17 13:22:34.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3391'
Feb 17 13:22:34.137: INFO: stderr: ""
Feb 17 13:22:34.137: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:22:34.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3391" for this suite.
Feb 17 13:22:40.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:22:40.378: INFO: namespace kubectl-3391 deletion completed in 6.15809881s

• [SLOW TEST:31.450 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:22:40.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:22:48.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1826" for this suite.
Feb 17 13:23:40.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:23:40.782: INFO: namespace kubelet-test-1826 deletion completed in 52.149478065s

• [SLOW TEST:60.404 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:23:40.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:24:40.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8547" for this suite.
Feb 17 13:25:03.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:25:03.130: INFO: namespace container-probe-8547 deletion completed in 22.1559204s

• [SLOW TEST:82.345 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:25:03.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 17 13:25:03.371: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5165" to be "success or failure"
Feb 17 13:25:03.385: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.446756ms
Feb 17 13:25:05.393: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021702267s
Feb 17 13:25:07.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028329141s
Feb 17 13:25:09.409: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037398082s
Feb 17 13:25:11.421: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049701522s
Feb 17 13:25:13.446: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074710639s
Feb 17 13:25:15.463: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.091543642s
STEP: Saw pod success
Feb 17 13:25:15.463: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 17 13:25:15.469: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Feb 17 13:25:15.701: INFO: Waiting for pod pod-host-path-test to disappear
Feb 17 13:25:15.736: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:25:15.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5165" for this suite.
Feb 17 13:25:21.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:25:21.956: INFO: namespace hostpath-5165 deletion completed in 6.211671692s

• [SLOW TEST:18.826 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:25:21.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 17 13:25:22.024: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 17 13:25:22.064: INFO: Waiting for terminating namespaces to be deleted...
Feb 17 13:25:22.068: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 17 13:25:22.079: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.079: INFO: Container kube-proxy ready: true, restart count 0
Feb 17 13:25:22.079: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 17 13:25:22.079: INFO: Container weave ready: true, restart count 0
Feb 17 13:25:22.079: INFO: Container weave-npc ready: true, restart count 0
Feb 17 13:25:22.079: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.079: INFO: Container kube-bench ready: false, restart count 0
Feb 17 13:25:22.079: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 17 13:25:22.088: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container etcd ready: true, restart count 0
Feb 17 13:25:22.088: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container weave ready: true, restart count 0
Feb 17 13:25:22.088: INFO: Container weave-npc ready: true, restart count 0
Feb 17 13:25:22.088: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container coredns ready: true, restart count 0
Feb 17 13:25:22.088: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container kube-controller-manager ready: true, restart count 23
Feb 17 13:25:22.088: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container kube-proxy ready: true, restart count 0
Feb 17 13:25:22.088: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container kube-apiserver ready: true, restart count 0
Feb 17 13:25:22.088: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container kube-scheduler ready: true, restart count 15
Feb 17 13:25:22.088: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 17 13:25:22.088: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 17 13:25:22.230: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 17 13:25:22.230: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb.15f4336f54d48720], Reason = [Scheduled], Message = [Successfully assigned sched-pred-507/filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb.15f4337049fe5c08], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb.15f433717a13e7ca], Reason = [Created], Message = [Created container filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb]
STEP: Considering event: Type = [Normal], Name = [filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb.15f43371af1a8ace], Reason = [Started], Message = [Started container filler-pod-722211bc-b25c-40a8-9d5e-825c4c75eaeb]
STEP: Considering event: Type = [Normal], Name = [filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f.15f4336f56f6499b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-507/filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f.15f43370b949fc99], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f.15f43371afab043c], Reason = [Created], Message = [Created container filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f]
STEP: Considering event: Type = [Normal], Name = [filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f.15f43371d018fd93], Reason = [Started], Message = [Started container filler-pod-999dbbe2-c0ac-46fd-bfb4-810453619b1f]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15f433722471ee1e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:25:35.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-507" for this suite.
Feb 17 13:25:43.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:25:43.663: INFO: namespace sched-pred-507 deletion completed in 8.131824665s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.706 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:25:43.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 13:25:44.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3565'
Feb 17 13:25:45.040: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 13:25:45.040: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 17 13:25:45.210: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-9rqvk]
Feb 17 13:25:45.210: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-9rqvk" in namespace "kubectl-3565" to be "running and ready"
Feb 17 13:25:45.214: INFO: Pod "e2e-test-nginx-rc-9rqvk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.808961ms
Feb 17 13:25:47.230: INFO: Pod "e2e-test-nginx-rc-9rqvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019147462s
Feb 17 13:25:49.240: INFO: Pod "e2e-test-nginx-rc-9rqvk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029573172s
Feb 17 13:25:51.247: INFO: Pod "e2e-test-nginx-rc-9rqvk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036247579s
Feb 17 13:25:53.260: INFO: Pod "e2e-test-nginx-rc-9rqvk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050107126s
Feb 17 13:25:55.272: INFO: Pod "e2e-test-nginx-rc-9rqvk": Phase="Running", Reason="", readiness=true. Elapsed: 10.062021412s
Feb 17 13:25:55.272: INFO: Pod "e2e-test-nginx-rc-9rqvk" satisfied condition "running and ready"
Feb 17 13:25:55.273: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-9rqvk]
Feb 17 13:25:55.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3565'
Feb 17 13:25:55.437: INFO: stderr: ""
Feb 17 13:25:55.437: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 17 13:25:55.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3565'
Feb 17 13:25:55.621: INFO: stderr: ""
Feb 17 13:25:55.621: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:25:55.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3565" for this suite.
Feb 17 13:26:17.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:26:17.809: INFO: namespace kubectl-3565 deletion completed in 22.182389801s

• [SLOW TEST:34.145 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:26:17.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 13:26:17.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2" in namespace "downward-api-3079" to be "success or failure"
Feb 17 13:26:18.028: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.977776ms
Feb 17 13:26:20.036: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037269204s
Feb 17 13:26:22.049: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05045419s
Feb 17 13:26:24.057: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058108598s
Feb 17 13:26:26.068: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069643748s
Feb 17 13:26:28.076: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077495757s
STEP: Saw pod success
Feb 17 13:26:28.076: INFO: Pod "downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2" satisfied condition "success or failure"
Feb 17 13:26:28.081: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2 container client-container:
STEP: delete the pod
Feb 17 13:26:28.137: INFO: Waiting for pod downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2 to disappear
Feb 17 13:26:28.242: INFO: Pod downwardapi-volume-1e8bd325-a870-4365-ad1e-439f1f9788d2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:26:28.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3079" for this suite.
Feb 17 13:26:34.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:26:34.382: INFO: namespace downward-api-3079 deletion completed in 6.134159027s

• [SLOW TEST:16.572 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:26:34.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 17 13:26:45.100: INFO: Successfully updated pod "labelsupdate8e179b83-135c-4dcf-ab1e-b48c3bbb8e29"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:26:47.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-91" for this suite.
Feb 17 13:27:09.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:27:09.417: INFO: namespace projected-91 deletion completed in 22.174601293s

• [SLOW TEST:35.035 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:27:09.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 17 13:27:18.787: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:27:18.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-659" for this suite. Feb 17 13:27:24.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:27:25.039: INFO: namespace container-runtime-659 deletion completed in 6.189789888s • [SLOW TEST:15.621 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:27:25.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet 
hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 17 13:27:41.214: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:41.224: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:43.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:43.234: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:45.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:45.232: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:47.225: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:47.234: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:49.225: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:49.233: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:51.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:51.232: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:53.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:53.240: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:55.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:55.233: INFO: Pod pod-with-poststart-http-hook still exists Feb 17 13:27:57.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 17 13:27:57.232: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:27:57.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-lifecycle-hook-6990" for this suite. Feb 17 13:28:19.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:28:19.405: INFO: namespace container-lifecycle-hook-6990 deletion completed in 22.166561533s • [SLOW TEST:54.366 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:28:19.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-5p76 STEP: Creating a pod to test atomic-volume-subpath Feb 17 13:28:19.560: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5p76" in namespace "subpath-7607" to be "success or failure" Feb 17 13:28:19.583: INFO: Pod 
"pod-subpath-test-projected-5p76": Phase="Pending", Reason="", readiness=false. Elapsed: 22.22167ms Feb 17 13:28:21.597: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036234353s Feb 17 13:28:23.625: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064572155s Feb 17 13:28:25.639: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078215967s Feb 17 13:28:27.650: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089312299s Feb 17 13:28:29.659: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 10.098803377s Feb 17 13:28:31.667: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 12.106887616s Feb 17 13:28:33.677: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 14.116955211s Feb 17 13:28:35.687: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 16.126616426s Feb 17 13:28:37.698: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 18.137396871s Feb 17 13:28:39.706: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 20.145268733s Feb 17 13:28:41.715: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 22.154918339s Feb 17 13:28:43.726: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 24.165870928s Feb 17 13:28:45.740: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. Elapsed: 26.179341859s Feb 17 13:28:47.749: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.188108997s Feb 17 13:28:49.757: INFO: Pod "pod-subpath-test-projected-5p76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.196668301s STEP: Saw pod success Feb 17 13:28:49.757: INFO: Pod "pod-subpath-test-projected-5p76" satisfied condition "success or failure" Feb 17 13:28:49.761: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-5p76 container test-container-subpath-projected-5p76: STEP: delete the pod Feb 17 13:28:49.818: INFO: Waiting for pod pod-subpath-test-projected-5p76 to disappear Feb 17 13:28:49.825: INFO: Pod pod-subpath-test-projected-5p76 no longer exists STEP: Deleting pod pod-subpath-test-projected-5p76 Feb 17 13:28:49.825: INFO: Deleting pod "pod-subpath-test-projected-5p76" in namespace "subpath-7607" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 13:28:49.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7607" for this suite. 
Feb 17 13:28:55.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 13:28:56.004: INFO: namespace subpath-7607 deletion completed in 6.169231137s • [SLOW TEST:36.598 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 13:28:56.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 17 13:28:56.089: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 12.364346ms)
Feb 17 13:28:56.096: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.280737ms)
Feb 17 13:28:56.103: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.894191ms)
Feb 17 13:28:56.125: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.889118ms)
Feb 17 13:28:56.171: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 46.272698ms)
Feb 17 13:28:56.181: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.999834ms)
Feb 17 13:28:56.192: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.403826ms)
Feb 17 13:28:56.201: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.429247ms)
Feb 17 13:28:56.209: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.203861ms)
Feb 17 13:28:56.219: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.608988ms)
Feb 17 13:28:56.227: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.550512ms)
Feb 17 13:28:56.235: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.517532ms)
Feb 17 13:28:56.243: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.585007ms)
Feb 17 13:28:56.252: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.926536ms)
Feb 17 13:28:56.266: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.188665ms)
Feb 17 13:28:56.274: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.592488ms)
Feb 17 13:28:56.281: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.242459ms)
Feb 17 13:28:56.288: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.490244ms)
Feb 17 13:28:56.293: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.224849ms)
Feb 17 13:28:56.300: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.633874ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:28:56.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8790" for this suite.
Feb 17 13:29:02.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:29:02.495: INFO: namespace proxy-8790 deletion completed in 6.187170164s

• [SLOW TEST:6.491 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:29:02.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 13:29:02.624: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70" in namespace "projected-7747" to be "success or failure"
Feb 17 13:29:02.639: INFO: Pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70": Phase="Pending", Reason="", readiness=false. Elapsed: 14.663481ms
Feb 17 13:29:04.651: INFO: Pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026952135s
Feb 17 13:29:06.661: INFO: Pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036909872s
Feb 17 13:29:08.672: INFO: Pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04788033s
Feb 17 13:29:10.683: INFO: Pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058642324s
STEP: Saw pod success
Feb 17 13:29:10.683: INFO: Pod "downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70" satisfied condition "success or failure"
Feb 17 13:29:10.687: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70 container client-container: 
STEP: delete the pod
Feb 17 13:29:10.720: INFO: Waiting for pod downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70 to disappear
Feb 17 13:29:10.759: INFO: Pod downwardapi-volume-0dd9fb6a-80c5-4a1a-847a-5d24935d0c70 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:29:10.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7747" for this suite.
Feb 17 13:29:16.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:29:16.952: INFO: namespace projected-7747 deletion completed in 6.186836288s

• [SLOW TEST:14.456 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
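Note: the test above verifies that a container's CPU limit is exposed through a projected downward API volume. As a reference for the feature being exercised, a minimal pod of this shape would do the same thing (names, image, and the limit value are illustrative, not the spec the test generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                    # surfaced into the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```

With `divisor: 1m`, the projected file reports the limit in millicores (500 for the sketch above); the test asserts on the container's output of that file.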
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:29:16.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 13:29:17.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4091'
Feb 17 13:29:17.158: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 13:29:17.158: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 17 13:29:19.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4091'
Feb 17 13:29:19.349: INFO: stderr: ""
Feb 17 13:29:19.349: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:29:19.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4091" for this suite.
Feb 17 13:29:25.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:29:25.562: INFO: namespace kubectl-4091 deletion completed in 6.169096633s

• [SLOW TEST:8.610 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
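Note: the stderr captured above warns that `kubectl run --generator=deployment/apps.v1` is deprecated. The same object can be created declaratively; a Deployment roughly equivalent to what that generator produces might look like this (the `run:` label key and single replica follow the generator's defaults, as an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # generator labels pods with run=<name>
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```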
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:29:25.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ea3260d6-0fba-4673-9966-7a5103e5b22e
STEP: Creating a pod to test consume secrets
Feb 17 13:29:25.700: INFO: Waiting up to 5m0s for pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d" in namespace "secrets-4779" to be "success or failure"
Feb 17 13:29:25.720: INFO: Pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.087251ms
Feb 17 13:29:27.731: INFO: Pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030229257s
Feb 17 13:29:29.737: INFO: Pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036496845s
Feb 17 13:29:31.760: INFO: Pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059179282s
Feb 17 13:29:33.774: INFO: Pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073523973s
STEP: Saw pod success
Feb 17 13:29:33.774: INFO: Pod "pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d" satisfied condition "success or failure"
Feb 17 13:29:33.783: INFO: Trying to get logs from node iruya-node pod pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d container secret-env-test: 
STEP: delete the pod
Feb 17 13:29:33.866: INFO: Waiting for pod pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d to disappear
Feb 17 13:29:33.914: INFO: Pod pod-secrets-577319a9-6bcb-4edc-b8de-f454db815c4d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:29:33.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4779" for this suite.
Feb 17 13:29:39.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:29:40.160: INFO: namespace secrets-4779 deletion completed in 6.228795714s

• [SLOW TEST:14.597 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
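Note: the secret-to-env-var consumption verified above can be expressed with a manifest along these lines (a minimal sketch; the secret name, key, and value are illustrative, since the test generates unique names):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo                    # illustrative; the test uses a generated name
type: Opaque
stringData:
  data-1: value-1                      # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
```

The pod's log then contains the secret's value, which is what the test reads back from the container.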
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:29:40.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 17 13:29:40.298: INFO: Waiting up to 5m0s for pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1" in namespace "emptydir-4625" to be "success or failure"
Feb 17 13:29:40.322: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.274475ms
Feb 17 13:29:42.331: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032985948s
Feb 17 13:29:44.339: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040530243s
Feb 17 13:29:46.355: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056663344s
Feb 17 13:29:48.561: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262879986s
Feb 17 13:29:50.575: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.276783609s
STEP: Saw pod success
Feb 17 13:29:50.576: INFO: Pod "pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1" satisfied condition "success or failure"
Feb 17 13:29:50.580: INFO: Trying to get logs from node iruya-node pod pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1 container test-container: 
STEP: delete the pod
Feb 17 13:29:50.715: INFO: Waiting for pod pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1 to disappear
Feb 17 13:29:50.729: INFO: Pod pod-dab1df5e-de34-43ed-a8fd-9fc9f6b3acc1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:29:50.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4625" for this suite.
Feb 17 13:29:56.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:29:56.901: INFO: namespace emptydir-4625 deletion completed in 6.161591564s

• [SLOW TEST:16.741 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
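Note: the (non-root,0777,default) variant above writes a 0777-mode file into an emptyDir on the node's default medium while running as a non-root UID. A hedged sketch of such a pod (UID, image, and file name are assumptions, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                  # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root; the test's UID may differ
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium (node storage, not tmpfs)
```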
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:29:56.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 17 13:30:07.640: INFO: Successfully updated pod "labelsupdate82972f9b-38ae-4d86-9782-dce45f602318"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:30:09.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4270" for this suite.
Feb 17 13:30:27.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:30:27.893: INFO: namespace downward-api-4270 deletion completed in 18.189652843s

• [SLOW TEST:30.991 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
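Note: the labels-on-modification check above relies on the kubelet rewriting a downward API volume file when pod labels change. A minimal pod exercising the same mechanism might look like this (name, image, and label values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo              # illustrative name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

After something like `kubectl label pod labelsupdate-demo key2=value2`, the kubelet eventually refreshes `/etc/podinfo/labels`; the test polls the container's output for the new label.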
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:30:27.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2651
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2651
STEP: Deleting pre-stop pod
Feb 17 13:30:55.065: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:30:55.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2651" for this suite.
Feb 17 13:31:33.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:31:33.299: INFO: namespace prestop-2651 deletion completed in 38.165588341s

• [SLOW TEST:65.407 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
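Note: the `"prestop": 1` in the JSON above is the server pod recording that the tester pod's preStop hook fired before deletion. A rough sketch of a pod wired this way (the server address, port, and image are placeholders; the real test resolves the server pod's IP at runtime):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tester                         # mirrors the tester pod in the log
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://SERVER_POD_IP:8080/prestop"]  # placeholder address/port
```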
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:31:33.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 13:31:33.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9" in namespace "projected-5973" to be "success or failure"
Feb 17 13:31:33.397: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.511817ms
Feb 17 13:31:35.412: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018594838s
Feb 17 13:31:37.421: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026805285s
Feb 17 13:31:39.430: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03642782s
Feb 17 13:31:41.443: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049276866s
Feb 17 13:31:43.453: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059599393s
STEP: Saw pod success
Feb 17 13:31:43.454: INFO: Pod "downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9" satisfied condition "success or failure"
Feb 17 13:31:43.459: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9 container client-container: 
STEP: delete the pod
Feb 17 13:31:43.560: INFO: Waiting for pod downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9 to disappear
Feb 17 13:31:43.578: INFO: Pod downwardapi-volume-8ea8b53e-a82a-4c83-8898-1305cb66cdb9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:31:43.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5973" for this suite.
Feb 17 13:31:49.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:31:49.894: INFO: namespace projected-5973 deletion completed in 6.309516623s

• [SLOW TEST:16.594 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:31:49.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 13:31:50.055: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 17 13:31:50.187: INFO: Number of nodes with available pods: 0
Feb 17 13:31:50.187: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:51.701: INFO: Number of nodes with available pods: 0
Feb 17 13:31:51.701: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:52.198: INFO: Number of nodes with available pods: 0
Feb 17 13:31:52.198: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:53.658: INFO: Number of nodes with available pods: 0
Feb 17 13:31:53.658: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:54.204: INFO: Number of nodes with available pods: 0
Feb 17 13:31:54.204: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:55.204: INFO: Number of nodes with available pods: 0
Feb 17 13:31:55.204: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:57.151: INFO: Number of nodes with available pods: 0
Feb 17 13:31:57.151: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:57.733: INFO: Number of nodes with available pods: 0
Feb 17 13:31:57.733: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:58.945: INFO: Number of nodes with available pods: 0
Feb 17 13:31:58.946: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:31:59.224: INFO: Number of nodes with available pods: 0
Feb 17 13:31:59.224: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:00.202: INFO: Number of nodes with available pods: 0
Feb 17 13:32:00.202: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:01.206: INFO: Number of nodes with available pods: 2
Feb 17 13:32:01.206: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 17 13:32:01.324: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:01.324: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:02.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:02.344: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:03.770: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:03.770: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:04.342: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:04.342: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:05.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:05.345: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:06.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:06.346: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:06.346: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:07.352: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:07.352: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:07.352: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:08.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:08.345: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:08.345: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:09.350: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:09.350: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:09.350: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:10.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:10.344: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:10.344: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:11.343: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:11.343: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:11.343: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:12.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:12.344: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:12.344: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:13.348: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:13.348: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:13.348: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:14.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:14.344: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:14.344: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:15.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:15.344: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:15.344: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:16.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:16.345: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:16.345: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:17.360: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:17.360: INFO: Wrong image for pod: daemon-set-mrkwq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:17.360: INFO: Pod daemon-set-mrkwq is not available
Feb 17 13:32:18.349: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:18.349: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:19.348: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:19.348: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:20.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:20.344: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:21.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:21.345: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:22.731: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:22.731: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:23.667: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:23.668: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:24.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:24.344: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:25.343: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:25.343: INFO: Pod daemon-set-7mtd9 is not available
Feb 17 13:32:26.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:27.355: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:28.350: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:29.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:30.524: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:31.346: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:31.346: INFO: Pod daemon-set-4ck5k is not available
Feb 17 13:32:32.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:32.345: INFO: Pod daemon-set-4ck5k is not available
Feb 17 13:32:33.347: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:33.347: INFO: Pod daemon-set-4ck5k is not available
Feb 17 13:32:34.345: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:34.345: INFO: Pod daemon-set-4ck5k is not available
Feb 17 13:32:35.344: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:35.344: INFO: Pod daemon-set-4ck5k is not available
Feb 17 13:32:36.348: INFO: Wrong image for pod: daemon-set-4ck5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 13:32:36.348: INFO: Pod daemon-set-4ck5k is not available
Feb 17 13:32:37.354: INFO: Pod daemon-set-mdw4d is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 17 13:32:37.373: INFO: Number of nodes with available pods: 1
Feb 17 13:32:37.373: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:38.384: INFO: Number of nodes with available pods: 1
Feb 17 13:32:38.384: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:39.389: INFO: Number of nodes with available pods: 1
Feb 17 13:32:39.389: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:40.400: INFO: Number of nodes with available pods: 1
Feb 17 13:32:40.400: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:41.393: INFO: Number of nodes with available pods: 1
Feb 17 13:32:41.393: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:42.391: INFO: Number of nodes with available pods: 1
Feb 17 13:32:42.391: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:43.411: INFO: Number of nodes with available pods: 1
Feb 17 13:32:43.411: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:44.402: INFO: Number of nodes with available pods: 1
Feb 17 13:32:44.402: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:32:45.406: INFO: Number of nodes with available pods: 2
Feb 17 13:32:45.406: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3049, will wait for the garbage collector to delete the pods
Feb 17 13:32:45.508: INFO: Deleting DaemonSet.extensions daemon-set took: 13.399455ms
Feb 17 13:32:45.909: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.577963ms
Feb 17 13:32:57.916: INFO: Number of nodes with available pods: 0
Feb 17 13:32:57.916: INFO: Number of running nodes: 0, number of available pods: 0
Feb 17 13:32:57.919: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3049/daemonsets","resourceVersion":"24699831"},"items":null}

Feb 17 13:32:57.922: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3049/pods","resourceVersion":"24699831"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:32:57.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3049" for this suite.
Feb 17 13:33:05.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:33:06.107: INFO: namespace daemonsets-3049 deletion completed in 8.16761195s

• [SLOW TEST:76.212 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
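The rolling update exercised above can be reproduced by hand. This is a hedged sketch, not the test's own code path: the DaemonSet name (`daemon-set`), namespace (`daemonsets-3049`), and both images come from the log, but the manifest filename and the container name `app` are assumptions. These are illustrative CLI fragments rather than a runnable script, since they require a live cluster.

```shell
# Sketch of the procedure the test performs (assumptions: manifest file name
# and container name "app" are hypothetical; names/images come from the log).

# Create the DaemonSet; RollingUpdate is the default strategy in apps/v1.
kubectl -n daemonsets-3049 create -f daemon-set.yaml

# Trigger the same image change the test makes (nginx -> redis test image):
kubectl -n daemonsets-3049 set image daemonset/daemon-set \
  app=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Old pods are replaced one node at a time (maxUnavailable defaults to 1),
# which matches the log: daemon-set-mrkwq drains first, then daemon-set-4ck5k.
kubectl -n daemonsets-3049 rollout status daemonset/daemon-set
```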
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:33:06.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6619.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6619.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6619.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6619.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6619.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.210.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.210.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.210.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.210.153_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6619.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6619.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6619.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6619.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6619.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6619.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.210.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.210.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.210.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.210.153_tcp@PTR;sleep 1; done

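The wheezy/jessie probe one-liners above are easier to follow unrolled. The `$$` sequences are Go-template escapes that collapse to `$` by the time the shell runs them; each check is just a `dig` lookup, a non-empty test, and a marker file under `/results`. The two derived names are pure string manipulation and can be sketched offline. The namespace `dns-6619` and service IP `10.106.210.153` come from the log; the sample pod IP `10.44.0.5` is a hypothetical stand-in for `hostname -i`.

```shell
#!/bin/sh
# Unrolled sketch of one probe iteration (pod IP is assumed; namespace and
# service IP come from the log above).

# Pod A record: dots in the pod IP become dashes, then the name is suffixed
# with <namespace>.pod.<cluster-domain>.
pod_ip="10.44.0.5"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6619.pod.cluster.local"}')
echo "$podARec"   # 10-44-0-5.dns-6619.pod.cluster.local

# Reverse-PTR name: octets reversed, suffixed with .in-addr.arpa. -- this is
# how 10.106.210.153 becomes the 153.210.106.10.in-addr.arpa. seen above.
svc_ip="10.106.210.153"
ptr=$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"       # 153.210.106.10.in-addr.arpa.

# One UDP check as the probe performs it (needs cluster DNS, so it is shown
# commented out here):
# check="$(dig +notcp +noall +answer +search "$podARec" A)" \
#   && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
```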
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 13:33:18.866: INFO: Unable to read wheezy_udp@dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.895: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.906: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.912: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.917: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.928: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.937: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.941: INFO: Unable to read 10.106.210.153_udp@PTR from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.949: INFO: Unable to read 10.106.210.153_tcp@PTR from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.954: INFO: Unable to read jessie_udp@dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.957: INFO: Unable to read jessie_tcp@dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.962: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.972: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.976: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6619.svc.cluster.local from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.980: INFO: Unable to read jessie_udp@PodARecord from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.983: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.985: INFO: Unable to read 10.106.210.153_udp@PTR from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.989: INFO: Unable to read 10.106.210.153_tcp@PTR from pod dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979: the server could not find the requested resource (get pods dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979)
Feb 17 13:33:18.989: INFO: Lookups using dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979 failed for: [wheezy_udp@dns-test-service.dns-6619.svc.cluster.local wheezy_tcp@dns-test-service.dns-6619.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6619.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6619.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.210.153_udp@PTR 10.106.210.153_tcp@PTR jessie_udp@dns-test-service.dns-6619.svc.cluster.local jessie_tcp@dns-test-service.dns-6619.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6619.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6619.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6619.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.210.153_udp@PTR 10.106.210.153_tcp@PTR]

Feb 17 13:33:24.096: INFO: DNS probes using dns-6619/dns-test-611a868d-8bb3-44aa-9923-15a60a8bc979 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:33:24.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6619" for this suite.
Feb 17 13:33:30.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:33:30.564: INFO: namespace dns-6619 deletion completed in 6.176479036s

• [SLOW TEST:24.457 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:33:30.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8716
I0217 13:33:30.622253       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8716, replica count: 1
I0217 13:33:31.673199       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:32.673574       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:33.673892       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:34.674292       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:35.674633       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:36.674892       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:37.675180       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 13:33:38.675588       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 17 13:33:38.819: INFO: Created: latency-svc-kbmsf
Feb 17 13:33:38.850: INFO: Got endpoints: latency-svc-kbmsf [74.057609ms]
Feb 17 13:33:38.955: INFO: Created: latency-svc-jfkr4
Feb 17 13:33:39.102: INFO: Created: latency-svc-7kt2l
Feb 17 13:33:39.103: INFO: Got endpoints: latency-svc-jfkr4 [248.7463ms]
Feb 17 13:33:39.138: INFO: Created: latency-svc-p9rw7
Feb 17 13:33:39.138: INFO: Got endpoints: latency-svc-7kt2l [285.895044ms]
Feb 17 13:33:39.171: INFO: Got endpoints: latency-svc-p9rw7 [315.601589ms]
Feb 17 13:33:39.179: INFO: Created: latency-svc-ppl4m
Feb 17 13:33:39.293: INFO: Got endpoints: latency-svc-ppl4m [439.594252ms]
Feb 17 13:33:39.294: INFO: Created: latency-svc-qgr64
Feb 17 13:33:39.336: INFO: Got endpoints: latency-svc-qgr64 [480.742825ms]
Feb 17 13:33:39.337: INFO: Created: latency-svc-pgkst
Feb 17 13:33:39.371: INFO: Got endpoints: latency-svc-pgkst [515.435808ms]
Feb 17 13:33:39.479: INFO: Created: latency-svc-jfnrv
Feb 17 13:33:39.487: INFO: Got endpoints: latency-svc-jfnrv [631.168524ms]
Feb 17 13:33:39.544: INFO: Created: latency-svc-k2ng9
Feb 17 13:33:39.550: INFO: Got endpoints: latency-svc-k2ng9 [694.699873ms]
Feb 17 13:33:39.624: INFO: Created: latency-svc-9q7t4
Feb 17 13:33:39.648: INFO: Got endpoints: latency-svc-9q7t4 [792.138754ms]
Feb 17 13:33:39.693: INFO: Created: latency-svc-b6thm
Feb 17 13:33:39.707: INFO: Got endpoints: latency-svc-b6thm [851.07737ms]
Feb 17 13:33:39.815: INFO: Created: latency-svc-vjsf4
Feb 17 13:33:39.821: INFO: Got endpoints: latency-svc-vjsf4 [966.08495ms]
Feb 17 13:33:39.881: INFO: Created: latency-svc-768m8
Feb 17 13:33:39.972: INFO: Got endpoints: latency-svc-768m8 [1.116195664s]
Feb 17 13:33:40.011: INFO: Created: latency-svc-kftmb
Feb 17 13:33:40.037: INFO: Got endpoints: latency-svc-kftmb [1.180824906s]
Feb 17 13:33:40.079: INFO: Created: latency-svc-wbk7w
Feb 17 13:33:40.220: INFO: Got endpoints: latency-svc-wbk7w [1.363276459s]
Feb 17 13:33:40.249: INFO: Created: latency-svc-9wggt
Feb 17 13:33:40.257: INFO: Got endpoints: latency-svc-9wggt [1.400297332s]
Feb 17 13:33:40.381: INFO: Created: latency-svc-tkwfv
Feb 17 13:33:40.391: INFO: Got endpoints: latency-svc-tkwfv [1.287426922s]
Feb 17 13:33:40.441: INFO: Created: latency-svc-xhcfp
Feb 17 13:33:40.442: INFO: Got endpoints: latency-svc-xhcfp [1.303535837s]
Feb 17 13:33:40.546: INFO: Created: latency-svc-jzqvl
Feb 17 13:33:40.560: INFO: Got endpoints: latency-svc-jzqvl [1.389252366s]
Feb 17 13:33:40.620: INFO: Created: latency-svc-qglxn
Feb 17 13:33:40.638: INFO: Got endpoints: latency-svc-qglxn [1.344558616s]
Feb 17 13:33:40.743: INFO: Created: latency-svc-b2qrd
Feb 17 13:33:40.744: INFO: Got endpoints: latency-svc-b2qrd [1.407572105s]
Feb 17 13:33:40.834: INFO: Created: latency-svc-r5hqh
Feb 17 13:33:40.899: INFO: Got endpoints: latency-svc-r5hqh [1.527649038s]
Feb 17 13:33:40.927: INFO: Created: latency-svc-mhx6l
Feb 17 13:33:40.938: INFO: Got endpoints: latency-svc-mhx6l [1.451088285s]
Feb 17 13:33:40.991: INFO: Created: latency-svc-zhshd
Feb 17 13:33:41.132: INFO: Got endpoints: latency-svc-zhshd [1.581493607s]
Feb 17 13:33:41.142: INFO: Created: latency-svc-mg7h6
Feb 17 13:33:41.178: INFO: Got endpoints: latency-svc-mg7h6 [1.529957403s]
Feb 17 13:33:41.182: INFO: Created: latency-svc-6c6t7
Feb 17 13:33:41.192: INFO: Got endpoints: latency-svc-6c6t7 [1.484719588s]
Feb 17 13:33:41.376: INFO: Created: latency-svc-6hsg8
Feb 17 13:33:41.386: INFO: Got endpoints: latency-svc-6hsg8 [1.564314615s]
Feb 17 13:33:41.434: INFO: Created: latency-svc-5n9rg
Feb 17 13:33:41.453: INFO: Got endpoints: latency-svc-5n9rg [1.479978232s]
Feb 17 13:33:41.555: INFO: Created: latency-svc-mtdfs
Feb 17 13:33:41.566: INFO: Got endpoints: latency-svc-mtdfs [1.528542874s]
Feb 17 13:33:41.611: INFO: Created: latency-svc-fdnwp
Feb 17 13:33:41.614: INFO: Got endpoints: latency-svc-fdnwp [1.3939064s]
Feb 17 13:33:41.648: INFO: Created: latency-svc-njlrf
Feb 17 13:33:41.744: INFO: Got endpoints: latency-svc-njlrf [1.48701834s]
Feb 17 13:33:41.764: INFO: Created: latency-svc-k5tg7
Feb 17 13:33:41.774: INFO: Got endpoints: latency-svc-k5tg7 [1.383202675s]
Feb 17 13:33:41.901: INFO: Created: latency-svc-fnxcm
Feb 17 13:33:41.912: INFO: Got endpoints: latency-svc-fnxcm [1.470015335s]
Feb 17 13:33:41.986: INFO: Created: latency-svc-4xjfh
Feb 17 13:33:42.124: INFO: Got endpoints: latency-svc-4xjfh [1.563832059s]
Feb 17 13:33:42.148: INFO: Created: latency-svc-pbsqh
Feb 17 13:33:42.192: INFO: Got endpoints: latency-svc-pbsqh [1.553535513s]
Feb 17 13:33:42.271: INFO: Created: latency-svc-jzb4t
Feb 17 13:33:42.281: INFO: Got endpoints: latency-svc-jzb4t [1.536127023s]
Feb 17 13:33:42.330: INFO: Created: latency-svc-c5xh7
Feb 17 13:33:42.338: INFO: Got endpoints: latency-svc-c5xh7 [1.438247518s]
Feb 17 13:33:42.452: INFO: Created: latency-svc-4pcpv
Feb 17 13:33:42.462: INFO: Got endpoints: latency-svc-4pcpv [1.523727951s]
Feb 17 13:33:42.506: INFO: Created: latency-svc-z2qnf
Feb 17 13:33:42.521: INFO: Got endpoints: latency-svc-z2qnf [1.388527912s]
Feb 17 13:33:42.621: INFO: Created: latency-svc-v7zc4
Feb 17 13:33:42.643: INFO: Got endpoints: latency-svc-v7zc4 [1.464410573s]
Feb 17 13:33:42.673: INFO: Created: latency-svc-ddznb
Feb 17 13:33:42.698: INFO: Got endpoints: latency-svc-ddznb [176.408124ms]
Feb 17 13:33:42.833: INFO: Created: latency-svc-kdbsq
Feb 17 13:33:42.886: INFO: Got endpoints: latency-svc-kdbsq [1.693774565s]
Feb 17 13:33:42.898: INFO: Created: latency-svc-9gfrr
Feb 17 13:33:42.975: INFO: Got endpoints: latency-svc-9gfrr [1.588947974s]
Feb 17 13:33:42.979: INFO: Created: latency-svc-xlzx6
Feb 17 13:33:43.010: INFO: Got endpoints: latency-svc-xlzx6 [1.55694688s]
Feb 17 13:33:43.036: INFO: Created: latency-svc-qzx8d
Feb 17 13:33:43.056: INFO: Got endpoints: latency-svc-qzx8d [1.489873427s]
Feb 17 13:33:43.150: INFO: Created: latency-svc-2dxtf
Feb 17 13:33:43.174: INFO: Got endpoints: latency-svc-2dxtf [1.560104703s]
Feb 17 13:33:43.208: INFO: Created: latency-svc-8fd58
Feb 17 13:33:43.238: INFO: Got endpoints: latency-svc-8fd58 [1.494059598s]
Feb 17 13:33:43.339: INFO: Created: latency-svc-78gp2
Feb 17 13:33:43.369: INFO: Got endpoints: latency-svc-78gp2 [1.594358351s]
Feb 17 13:33:43.414: INFO: Created: latency-svc-8s8hz
Feb 17 13:33:43.551: INFO: Got endpoints: latency-svc-8s8hz [1.638774222s]
Feb 17 13:33:43.633: INFO: Created: latency-svc-clzcw
Feb 17 13:33:43.763: INFO: Got endpoints: latency-svc-clzcw [1.637888952s]
Feb 17 13:33:43.784: INFO: Created: latency-svc-lbx7w
Feb 17 13:33:43.840: INFO: Got endpoints: latency-svc-lbx7w [1.648537242s]
Feb 17 13:33:43.934: INFO: Created: latency-svc-s2p42
Feb 17 13:33:43.959: INFO: Got endpoints: latency-svc-s2p42 [1.677413457s]
Feb 17 13:33:44.003: INFO: Created: latency-svc-jl674
Feb 17 13:33:44.003: INFO: Got endpoints: latency-svc-jl674 [1.665295253s]
Feb 17 13:33:44.100: INFO: Created: latency-svc-k7bt2
Feb 17 13:33:44.125: INFO: Got endpoints: latency-svc-k7bt2 [1.662126578s]
Feb 17 13:33:44.139: INFO: Created: latency-svc-qcx4z
Feb 17 13:33:44.152: INFO: Got endpoints: latency-svc-qcx4z [1.508409018s]
Feb 17 13:33:44.175: INFO: Created: latency-svc-5l6jm
Feb 17 13:33:44.192: INFO: Got endpoints: latency-svc-5l6jm [1.493939179s]
Feb 17 13:33:44.296: INFO: Created: latency-svc-d2pv4
Feb 17 13:33:44.306: INFO: Got endpoints: latency-svc-d2pv4 [1.419828402s]
Feb 17 13:33:44.386: INFO: Created: latency-svc-7xfhw
Feb 17 13:33:44.459: INFO: Got endpoints: latency-svc-7xfhw [1.483360273s]
Feb 17 13:33:44.479: INFO: Created: latency-svc-svvrh
Feb 17 13:33:44.484: INFO: Got endpoints: latency-svc-svvrh [1.474172425s]
Feb 17 13:33:44.523: INFO: Created: latency-svc-79bx7
Feb 17 13:33:44.529: INFO: Got endpoints: latency-svc-79bx7 [1.47350371s]
Feb 17 13:33:44.600: INFO: Created: latency-svc-74cjl
Feb 17 13:33:44.608: INFO: Got endpoints: latency-svc-74cjl [1.433641552s]
Feb 17 13:33:44.660: INFO: Created: latency-svc-bpsx2
Feb 17 13:33:44.669: INFO: Got endpoints: latency-svc-bpsx2 [1.430434417s]
Feb 17 13:33:44.715: INFO: Created: latency-svc-5fzqd
Feb 17 13:33:44.767: INFO: Got endpoints: latency-svc-5fzqd [1.397929565s]
Feb 17 13:33:44.809: INFO: Created: latency-svc-d9q6f
Feb 17 13:33:44.811: INFO: Got endpoints: latency-svc-d9q6f [1.260642779s]
Feb 17 13:33:44.860: INFO: Created: latency-svc-9gfmv
Feb 17 13:33:44.937: INFO: Got endpoints: latency-svc-9gfmv [1.173490338s]
Feb 17 13:33:44.944: INFO: Created: latency-svc-zd4xw
Feb 17 13:33:44.955: INFO: Got endpoints: latency-svc-zd4xw [1.114153859s]
Feb 17 13:33:44.977: INFO: Created: latency-svc-pvddr
Feb 17 13:33:44.986: INFO: Got endpoints: latency-svc-pvddr [1.02731901s]
Feb 17 13:33:45.017: INFO: Created: latency-svc-z22zr
Feb 17 13:33:45.025: INFO: Got endpoints: latency-svc-z22zr [1.021820914s]
Feb 17 13:33:45.099: INFO: Created: latency-svc-7ccxw
Feb 17 13:33:45.107: INFO: Got endpoints: latency-svc-7ccxw [981.880104ms]
Feb 17 13:33:45.166: INFO: Created: latency-svc-q96d2
Feb 17 13:33:45.175: INFO: Got endpoints: latency-svc-q96d2 [1.022612833s]
Feb 17 13:33:45.265: INFO: Created: latency-svc-2lb5c
Feb 17 13:33:45.271: INFO: Got endpoints: latency-svc-2lb5c [1.078185816s]
Feb 17 13:33:45.292: INFO: Created: latency-svc-4xk9c
Feb 17 13:33:45.295: INFO: Got endpoints: latency-svc-4xk9c [988.695052ms]
Feb 17 13:33:45.335: INFO: Created: latency-svc-vsj6x
Feb 17 13:33:45.346: INFO: Got endpoints: latency-svc-vsj6x [886.613142ms]
Feb 17 13:33:45.416: INFO: Created: latency-svc-6dtdd
Feb 17 13:33:45.434: INFO: Got endpoints: latency-svc-6dtdd [949.897567ms]
Feb 17 13:33:45.475: INFO: Created: latency-svc-2kl8z
Feb 17 13:33:45.475: INFO: Got endpoints: latency-svc-2kl8z [945.350854ms]
Feb 17 13:33:45.514: INFO: Created: latency-svc-gn8c4
Feb 17 13:33:45.572: INFO: Got endpoints: latency-svc-gn8c4 [963.871143ms]
Feb 17 13:33:45.599: INFO: Created: latency-svc-g8cf5
Feb 17 13:33:45.599: INFO: Got endpoints: latency-svc-g8cf5 [929.879769ms]
Feb 17 13:33:45.629: INFO: Created: latency-svc-m6nnf
Feb 17 13:33:45.640: INFO: Got endpoints: latency-svc-m6nnf [872.700378ms]
Feb 17 13:33:45.669: INFO: Created: latency-svc-gnr4p
Feb 17 13:33:45.700: INFO: Got endpoints: latency-svc-gnr4p [888.70569ms]
Feb 17 13:33:45.719: INFO: Created: latency-svc-6kxhq
Feb 17 13:33:45.724: INFO: Got endpoints: latency-svc-6kxhq [787.026587ms]
Feb 17 13:33:45.804: INFO: Created: latency-svc-2bjv4
Feb 17 13:33:45.901: INFO: Got endpoints: latency-svc-2bjv4 [946.392093ms]
Feb 17 13:33:45.955: INFO: Created: latency-svc-7k9dl
Feb 17 13:33:45.956: INFO: Got endpoints: latency-svc-7k9dl [969.514023ms]
Feb 17 13:33:45.999: INFO: Created: latency-svc-bg6t6
Feb 17 13:33:46.046: INFO: Got endpoints: latency-svc-bg6t6 [1.020834856s]
Feb 17 13:33:46.059: INFO: Created: latency-svc-sxchg
Feb 17 13:33:46.078: INFO: Got endpoints: latency-svc-sxchg [971.404917ms]
Feb 17 13:33:46.110: INFO: Created: latency-svc-pm22m
Feb 17 13:33:46.120: INFO: Got endpoints: latency-svc-pm22m [945.079122ms]
Feb 17 13:33:46.221: INFO: Created: latency-svc-7pzhm
Feb 17 13:33:46.227: INFO: Got endpoints: latency-svc-7pzhm [955.957866ms]
Feb 17 13:33:46.275: INFO: Created: latency-svc-wh6xg
Feb 17 13:33:46.284: INFO: Got endpoints: latency-svc-wh6xg [988.045604ms]
Feb 17 13:33:46.382: INFO: Created: latency-svc-5bqqg
Feb 17 13:33:46.389: INFO: Got endpoints: latency-svc-5bqqg [1.043295592s]
Feb 17 13:33:46.422: INFO: Created: latency-svc-hvmsj
Feb 17 13:33:46.430: INFO: Got endpoints: latency-svc-hvmsj [995.812115ms]
Feb 17 13:33:46.461: INFO: Created: latency-svc-867n4
Feb 17 13:33:46.519: INFO: Got endpoints: latency-svc-867n4 [1.043466354s]
Feb 17 13:33:46.536: INFO: Created: latency-svc-xr84h
Feb 17 13:33:46.552: INFO: Got endpoints: latency-svc-xr84h [979.504994ms]
Feb 17 13:33:46.589: INFO: Created: latency-svc-cl5l4
Feb 17 13:33:46.695: INFO: Got endpoints: latency-svc-cl5l4 [1.095881189s]
Feb 17 13:33:46.705: INFO: Created: latency-svc-ghs7s
Feb 17 13:33:46.707: INFO: Got endpoints: latency-svc-ghs7s [1.066775653s]
Feb 17 13:33:46.737: INFO: Created: latency-svc-vvk4x
Feb 17 13:33:46.744: INFO: Got endpoints: latency-svc-vvk4x [1.043382765s]
Feb 17 13:33:46.772: INFO: Created: latency-svc-n8w9g
Feb 17 13:33:46.783: INFO: Got endpoints: latency-svc-n8w9g [1.05862563s]
Feb 17 13:33:46.876: INFO: Created: latency-svc-xsv58
Feb 17 13:33:46.887: INFO: Got endpoints: latency-svc-xsv58 [985.775938ms]
Feb 17 13:33:46.927: INFO: Created: latency-svc-5gztd
Feb 17 13:33:46.941: INFO: Got endpoints: latency-svc-5gztd [984.789021ms]
Feb 17 13:33:47.069: INFO: Created: latency-svc-sk8zb
Feb 17 13:33:47.076: INFO: Got endpoints: latency-svc-sk8zb [1.029447991s]
Feb 17 13:33:47.105: INFO: Created: latency-svc-hjm7p
Feb 17 13:33:47.138: INFO: Got endpoints: latency-svc-hjm7p [1.059069963s]
Feb 17 13:33:47.140: INFO: Created: latency-svc-bbwv5
Feb 17 13:33:47.151: INFO: Got endpoints: latency-svc-bbwv5 [1.030711776s]
Feb 17 13:33:47.319: INFO: Created: latency-svc-hr7sq
Feb 17 13:33:47.331: INFO: Got endpoints: latency-svc-hr7sq [1.103796602s]
Feb 17 13:33:47.388: INFO: Created: latency-svc-plnnd
Feb 17 13:33:47.395: INFO: Got endpoints: latency-svc-plnnd [1.11162115s]
Feb 17 13:33:47.542: INFO: Created: latency-svc-8slxq
Feb 17 13:33:47.570: INFO: Got endpoints: latency-svc-8slxq [1.180918025s]
Feb 17 13:33:47.712: INFO: Created: latency-svc-mgvpp
Feb 17 13:33:47.720: INFO: Got endpoints: latency-svc-mgvpp [1.289769146s]
Feb 17 13:33:47.754: INFO: Created: latency-svc-4dn47
Feb 17 13:33:47.764: INFO: Got endpoints: latency-svc-4dn47 [1.244842271s]
Feb 17 13:33:47.802: INFO: Created: latency-svc-rrhzt
Feb 17 13:33:47.876: INFO: Got endpoints: latency-svc-rrhzt [1.324336481s]
Feb 17 13:33:47.904: INFO: Created: latency-svc-lnld4
Feb 17 13:33:47.909: INFO: Got endpoints: latency-svc-lnld4 [1.213397812s]
Feb 17 13:33:47.967: INFO: Created: latency-svc-xrwqq
Feb 17 13:33:48.055: INFO: Got endpoints: latency-svc-xrwqq [1.34773725s]
Feb 17 13:33:48.073: INFO: Created: latency-svc-xp5jk
Feb 17 13:33:48.080: INFO: Got endpoints: latency-svc-xp5jk [1.336344885s]
Feb 17 13:33:48.125: INFO: Created: latency-svc-npxpx
Feb 17 13:33:48.131: INFO: Got endpoints: latency-svc-npxpx [1.347759114s]
Feb 17 13:33:48.250: INFO: Created: latency-svc-5hh74
Feb 17 13:33:48.264: INFO: Got endpoints: latency-svc-5hh74 [1.376701877s]
Feb 17 13:33:48.319: INFO: Created: latency-svc-7bc6d
Feb 17 13:33:48.338: INFO: Got endpoints: latency-svc-7bc6d [1.397080584s]
Feb 17 13:33:48.464: INFO: Created: latency-svc-fjtkz
Feb 17 13:33:48.468: INFO: Got endpoints: latency-svc-fjtkz [1.391109067s]
Feb 17 13:33:48.533: INFO: Created: latency-svc-k6trq
Feb 17 13:33:48.621: INFO: Got endpoints: latency-svc-k6trq [1.483504036s]
Feb 17 13:33:48.631: INFO: Created: latency-svc-qj622
Feb 17 13:33:48.644: INFO: Got endpoints: latency-svc-qj622 [1.493136215s]
Feb 17 13:33:48.686: INFO: Created: latency-svc-t5rlz
Feb 17 13:33:48.828: INFO: Got endpoints: latency-svc-t5rlz [1.497525925s]
Feb 17 13:33:48.830: INFO: Created: latency-svc-mm8pm
Feb 17 13:33:48.839: INFO: Got endpoints: latency-svc-mm8pm [1.444006178s]
Feb 17 13:33:48.900: INFO: Created: latency-svc-6l4hj
Feb 17 13:33:48.911: INFO: Got endpoints: latency-svc-6l4hj [1.340595036s]
Feb 17 13:33:49.092: INFO: Created: latency-svc-zkgck
Feb 17 13:33:49.101: INFO: Got endpoints: latency-svc-zkgck [1.380764833s]
Feb 17 13:33:49.251: INFO: Created: latency-svc-qddmp
Feb 17 13:33:49.251: INFO: Got endpoints: latency-svc-qddmp [1.486765043s]
Feb 17 13:33:49.315: INFO: Created: latency-svc-2j7cj
Feb 17 13:33:49.326: INFO: Got endpoints: latency-svc-2j7cj [1.449668146s]
Feb 17 13:33:49.405: INFO: Created: latency-svc-8b762
Feb 17 13:33:49.407: INFO: Got endpoints: latency-svc-8b762 [1.498416045s]
Feb 17 13:33:49.441: INFO: Created: latency-svc-r4d9q
Feb 17 13:33:49.458: INFO: Got endpoints: latency-svc-r4d9q [1.403484831s]
Feb 17 13:33:49.481: INFO: Created: latency-svc-xsbtp
Feb 17 13:33:49.556: INFO: Got endpoints: latency-svc-xsbtp [1.475458826s]
Feb 17 13:33:49.557: INFO: Created: latency-svc-79m6n
Feb 17 13:33:49.564: INFO: Got endpoints: latency-svc-79m6n [1.43322248s]
Feb 17 13:33:49.596: INFO: Created: latency-svc-jhlkp
Feb 17 13:33:49.600: INFO: Got endpoints: latency-svc-jhlkp [1.335951174s]
Feb 17 13:33:49.635: INFO: Created: latency-svc-wsxt6
Feb 17 13:33:49.636: INFO: Got endpoints: latency-svc-wsxt6 [1.297738791s]
Feb 17 13:33:49.743: INFO: Created: latency-svc-2hjnk
Feb 17 13:33:49.758: INFO: Got endpoints: latency-svc-2hjnk [1.290206508s]
Feb 17 13:33:49.807: INFO: Created: latency-svc-lbl89
Feb 17 13:33:49.818: INFO: Got endpoints: latency-svc-lbl89 [1.196616543s]
Feb 17 13:33:49.924: INFO: Created: latency-svc-klxhh
Feb 17 13:33:49.927: INFO: Got endpoints: latency-svc-klxhh [1.283103519s]
Feb 17 13:33:49.963: INFO: Created: latency-svc-sxw8f
Feb 17 13:33:49.974: INFO: Got endpoints: latency-svc-sxw8f [1.145069415s]
Feb 17 13:33:50.092: INFO: Created: latency-svc-ldxx5
Feb 17 13:33:50.092: INFO: Got endpoints: latency-svc-ldxx5 [1.251982211s]
Feb 17 13:33:50.139: INFO: Created: latency-svc-rgk2d
Feb 17 13:33:50.145: INFO: Got endpoints: latency-svc-rgk2d [1.233872901s]
Feb 17 13:33:50.260: INFO: Created: latency-svc-hnztr
Feb 17 13:33:50.269: INFO: Got endpoints: latency-svc-hnztr [1.167940161s]
Feb 17 13:33:50.335: INFO: Created: latency-svc-8jk5x
Feb 17 13:33:50.485: INFO: Got endpoints: latency-svc-8jk5x [1.234091896s]
Feb 17 13:33:50.495: INFO: Created: latency-svc-x5tbp
Feb 17 13:33:50.504: INFO: Got endpoints: latency-svc-x5tbp [1.178268833s]
Feb 17 13:33:50.549: INFO: Created: latency-svc-ccjrt
Feb 17 13:33:50.569: INFO: Got endpoints: latency-svc-ccjrt [1.161728368s]
Feb 17 13:33:50.669: INFO: Created: latency-svc-tn6mg
Feb 17 13:33:50.726: INFO: Got endpoints: latency-svc-tn6mg [1.267210612s]
Feb 17 13:33:50.729: INFO: Created: latency-svc-f9zf4
Feb 17 13:33:50.750: INFO: Got endpoints: latency-svc-f9zf4 [1.194154204s]
Feb 17 13:33:50.877: INFO: Created: latency-svc-wbhp8
Feb 17 13:33:50.890: INFO: Got endpoints: latency-svc-wbhp8 [1.326062067s]
Feb 17 13:33:50.925: INFO: Created: latency-svc-4wj45
Feb 17 13:33:50.932: INFO: Got endpoints: latency-svc-4wj45 [1.331571935s]
Feb 17 13:33:50.969: INFO: Created: latency-svc-n25jl
Feb 17 13:33:51.036: INFO: Got endpoints: latency-svc-n25jl [1.399854908s]
Feb 17 13:33:51.061: INFO: Created: latency-svc-88cfp
Feb 17 13:33:51.069: INFO: Got endpoints: latency-svc-88cfp [1.310569697s]
Feb 17 13:33:51.108: INFO: Created: latency-svc-8lx96
Feb 17 13:33:51.117: INFO: Got endpoints: latency-svc-8lx96 [1.298943906s]
Feb 17 13:33:51.241: INFO: Created: latency-svc-ccjzh
Feb 17 13:33:51.247: INFO: Got endpoints: latency-svc-ccjzh [1.319619696s]
Feb 17 13:33:51.311: INFO: Created: latency-svc-xmxdd
Feb 17 13:33:51.353: INFO: Got endpoints: latency-svc-xmxdd [1.379218964s]
Feb 17 13:33:51.555: INFO: Created: latency-svc-xqzf7
Feb 17 13:33:51.574: INFO: Got endpoints: latency-svc-xqzf7 [1.482320434s]
Feb 17 13:33:51.666: INFO: Created: latency-svc-rfj8r
Feb 17 13:33:51.667: INFO: Got endpoints: latency-svc-rfj8r [1.521635942s]
Feb 17 13:33:51.739: INFO: Created: latency-svc-qhhg4
Feb 17 13:33:51.744: INFO: Got endpoints: latency-svc-qhhg4 [1.474844615s]
Feb 17 13:33:51.876: INFO: Created: latency-svc-m6vtw
Feb 17 13:33:51.885: INFO: Got endpoints: latency-svc-m6vtw [1.399485088s]
Feb 17 13:33:51.921: INFO: Created: latency-svc-gwq2c
Feb 17 13:33:51.929: INFO: Got endpoints: latency-svc-gwq2c [1.423951109s]
Feb 17 13:33:52.034: INFO: Created: latency-svc-xpjbr
Feb 17 13:33:52.040: INFO: Got endpoints: latency-svc-xpjbr [1.471303289s]
Feb 17 13:33:52.079: INFO: Created: latency-svc-dhsbs
Feb 17 13:33:52.115: INFO: Created: latency-svc-k6m8b
Feb 17 13:33:52.116: INFO: Got endpoints: latency-svc-dhsbs [1.389848719s]
Feb 17 13:33:52.124: INFO: Got endpoints: latency-svc-k6m8b [1.373760574s]
Feb 17 13:33:52.210: INFO: Created: latency-svc-shmd8
Feb 17 13:33:52.248: INFO: Got endpoints: latency-svc-shmd8 [1.357300839s]
Feb 17 13:33:52.273: INFO: Created: latency-svc-2mmqm
Feb 17 13:33:52.282: INFO: Got endpoints: latency-svc-2mmqm [1.350069046s]
Feb 17 13:33:52.420: INFO: Created: latency-svc-j9865
Feb 17 13:33:52.427: INFO: Got endpoints: latency-svc-j9865 [1.390096788s]
Feb 17 13:33:52.500: INFO: Created: latency-svc-mjr2n
Feb 17 13:33:52.614: INFO: Got endpoints: latency-svc-mjr2n [1.544731377s]
Feb 17 13:33:52.616: INFO: Created: latency-svc-wdf72
Feb 17 13:33:52.660: INFO: Got endpoints: latency-svc-wdf72 [1.542318392s]
Feb 17 13:33:52.854: INFO: Created: latency-svc-4jchm
Feb 17 13:33:52.863: INFO: Got endpoints: latency-svc-4jchm [1.615973048s]
Feb 17 13:33:52.952: INFO: Created: latency-svc-pz2w8
Feb 17 13:33:53.126: INFO: Got endpoints: latency-svc-pz2w8 [1.772935867s]
Feb 17 13:33:53.192: INFO: Created: latency-svc-qcfc5
Feb 17 13:33:53.206: INFO: Got endpoints: latency-svc-qcfc5 [1.631242442s]
Feb 17 13:33:53.388: INFO: Created: latency-svc-7vk9z
Feb 17 13:33:53.441: INFO: Got endpoints: latency-svc-7vk9z [1.773868682s]
Feb 17 13:33:53.577: INFO: Created: latency-svc-wfkp8
Feb 17 13:33:53.645: INFO: Got endpoints: latency-svc-wfkp8 [1.900988748s]
Feb 17 13:33:53.657: INFO: Created: latency-svc-wdssc
Feb 17 13:33:53.663: INFO: Got endpoints: latency-svc-wdssc [1.777851462s]
Feb 17 13:33:53.823: INFO: Created: latency-svc-qphcb
Feb 17 13:33:53.854: INFO: Got endpoints: latency-svc-qphcb [1.925743879s]
Feb 17 13:33:53.996: INFO: Created: latency-svc-qwtkf
Feb 17 13:33:54.005: INFO: Got endpoints: latency-svc-qwtkf [1.964913343s]
Feb 17 13:33:54.171: INFO: Created: latency-svc-tgzxs
Feb 17 13:33:54.184: INFO: Got endpoints: latency-svc-tgzxs [2.067546009s]
Feb 17 13:33:54.325: INFO: Created: latency-svc-tzq5j
Feb 17 13:33:54.374: INFO: Got endpoints: latency-svc-tzq5j [2.249897195s]
Feb 17 13:33:54.377: INFO: Created: latency-svc-952bp
Feb 17 13:33:54.390: INFO: Got endpoints: latency-svc-952bp [2.141542007s]
Feb 17 13:33:54.531: INFO: Created: latency-svc-b8nhm
Feb 17 13:33:54.562: INFO: Got endpoints: latency-svc-b8nhm [2.279622023s]
Feb 17 13:33:54.617: INFO: Created: latency-svc-26gkl
Feb 17 13:33:54.636: INFO: Got endpoints: latency-svc-26gkl [2.208449141s]
Feb 17 13:33:54.675: INFO: Created: latency-svc-fs6cs
Feb 17 13:33:54.704: INFO: Got endpoints: latency-svc-fs6cs [2.09029574s]
Feb 17 13:33:54.810: INFO: Created: latency-svc-xjg5g
Feb 17 13:33:54.853: INFO: Got endpoints: latency-svc-xjg5g [2.19296286s]
Feb 17 13:33:54.861: INFO: Created: latency-svc-bw8gb
Feb 17 13:33:54.867: INFO: Got endpoints: latency-svc-bw8gb [2.003316465s]
Feb 17 13:33:54.950: INFO: Created: latency-svc-zcm9z
Feb 17 13:33:54.982: INFO: Got endpoints: latency-svc-zcm9z [1.854948071s]
Feb 17 13:33:54.989: INFO: Created: latency-svc-29cx8
Feb 17 13:33:54.993: INFO: Got endpoints: latency-svc-29cx8 [1.787332507s]
Feb 17 13:33:55.036: INFO: Created: latency-svc-ht4zn
Feb 17 13:33:55.079: INFO: Got endpoints: latency-svc-ht4zn [1.637377968s]
Feb 17 13:33:55.108: INFO: Created: latency-svc-xfjgc
Feb 17 13:33:55.123: INFO: Got endpoints: latency-svc-xfjgc [1.476519175s]
Feb 17 13:33:55.167: INFO: Created: latency-svc-schb7
Feb 17 13:33:55.231: INFO: Got endpoints: latency-svc-schb7 [1.568669404s]
Feb 17 13:33:55.233: INFO: Created: latency-svc-868c8
Feb 17 13:33:55.239: INFO: Got endpoints: latency-svc-868c8 [1.383972543s]
Feb 17 13:33:55.282: INFO: Created: latency-svc-c24n5
Feb 17 13:33:55.290: INFO: Got endpoints: latency-svc-c24n5 [1.284560101s]
Feb 17 13:33:55.321: INFO: Created: latency-svc-tcm9f
Feb 17 13:33:55.371: INFO: Got endpoints: latency-svc-tcm9f [1.186982919s]
Feb 17 13:33:55.397: INFO: Created: latency-svc-cxqbv
Feb 17 13:33:55.407: INFO: Got endpoints: latency-svc-cxqbv [1.032519131s]
Feb 17 13:33:55.548: INFO: Created: latency-svc-4mrx2
Feb 17 13:33:55.555: INFO: Got endpoints: latency-svc-4mrx2 [1.165221266s]
Feb 17 13:33:55.587: INFO: Created: latency-svc-zngp4
Feb 17 13:33:55.588: INFO: Got endpoints: latency-svc-zngp4 [1.026021212s]
Feb 17 13:33:55.612: INFO: Created: latency-svc-q8jhg
Feb 17 13:33:55.619: INFO: Got endpoints: latency-svc-q8jhg [983.159081ms]
Feb 17 13:33:55.697: INFO: Created: latency-svc-gz4lr
Feb 17 13:33:55.715: INFO: Got endpoints: latency-svc-gz4lr [1.010169298s]
Feb 17 13:33:55.738: INFO: Created: latency-svc-lbn86
Feb 17 13:33:55.740: INFO: Got endpoints: latency-svc-lbn86 [886.978923ms]
Feb 17 13:33:56.534: INFO: Created: latency-svc-9j6kh
Feb 17 13:33:56.569: INFO: Got endpoints: latency-svc-9j6kh [1.702106622s]
Feb 17 13:33:56.617: INFO: Created: latency-svc-dtlfk
Feb 17 13:33:56.693: INFO: Got endpoints: latency-svc-dtlfk [1.710753094s]
Feb 17 13:33:56.706: INFO: Created: latency-svc-nzcgn
Feb 17 13:33:56.721: INFO: Got endpoints: latency-svc-nzcgn [1.727640619s]
Feb 17 13:33:56.778: INFO: Created: latency-svc-2sl6c
Feb 17 13:33:56.887: INFO: Got endpoints: latency-svc-2sl6c [1.808323545s]
Feb 17 13:33:56.928: INFO: Created: latency-svc-n7bfj
Feb 17 13:33:56.953: INFO: Got endpoints: latency-svc-n7bfj [1.830206094s]
Feb 17 13:33:57.039: INFO: Created: latency-svc-gqz2h
Feb 17 13:33:57.048: INFO: Got endpoints: latency-svc-gqz2h [1.816195345s]
Feb 17 13:33:57.073: INFO: Created: latency-svc-r5wc9
Feb 17 13:33:57.080: INFO: Got endpoints: latency-svc-r5wc9 [1.840453225s]
Feb 17 13:33:57.102: INFO: Created: latency-svc-7sskj
Feb 17 13:33:57.123: INFO: Got endpoints: latency-svc-7sskj [1.83281528s]
Feb 17 13:33:57.127: INFO: Created: latency-svc-qdqsf
Feb 17 13:33:57.206: INFO: Got endpoints: latency-svc-qdqsf [1.834179381s]
Feb 17 13:33:57.214: INFO: Created: latency-svc-vwsfn
Feb 17 13:33:57.223: INFO: Got endpoints: latency-svc-vwsfn [1.815577549s]
Feb 17 13:33:57.257: INFO: Created: latency-svc-fs7n9
Feb 17 13:33:57.264: INFO: Got endpoints: latency-svc-fs7n9 [1.708681687s]
Feb 17 13:33:57.292: INFO: Created: latency-svc-ccpk9
Feb 17 13:33:57.299: INFO: Got endpoints: latency-svc-ccpk9 [1.710217563s]
Feb 17 13:33:57.299: INFO: Latencies: [176.408124ms 248.7463ms 285.895044ms 315.601589ms 439.594252ms 480.742825ms 515.435808ms 631.168524ms 694.699873ms 787.026587ms 792.138754ms 851.07737ms 872.700378ms 886.613142ms 886.978923ms 888.70569ms 929.879769ms 945.079122ms 945.350854ms 946.392093ms 949.897567ms 955.957866ms 963.871143ms 966.08495ms 969.514023ms 971.404917ms 979.504994ms 981.880104ms 983.159081ms 984.789021ms 985.775938ms 988.045604ms 988.695052ms 995.812115ms 1.010169298s 1.020834856s 1.021820914s 1.022612833s 1.026021212s 1.02731901s 1.029447991s 1.030711776s 1.032519131s 1.043295592s 1.043382765s 1.043466354s 1.05862563s 1.059069963s 1.066775653s 1.078185816s 1.095881189s 1.103796602s 1.11162115s 1.114153859s 1.116195664s 1.145069415s 1.161728368s 1.165221266s 1.167940161s 1.173490338s 1.178268833s 1.180824906s 1.180918025s 1.186982919s 1.194154204s 1.196616543s 1.213397812s 1.233872901s 1.234091896s 1.244842271s 1.251982211s 1.260642779s 1.267210612s 1.283103519s 1.284560101s 1.287426922s 1.289769146s 1.290206508s 1.297738791s 1.298943906s 1.303535837s 1.310569697s 1.319619696s 1.324336481s 1.326062067s 1.331571935s 1.335951174s 1.336344885s 1.340595036s 1.344558616s 1.34773725s 1.347759114s 1.350069046s 1.357300839s 1.363276459s 1.373760574s 1.376701877s 1.379218964s 1.380764833s 1.383202675s 1.383972543s 1.388527912s 1.389252366s 1.389848719s 1.390096788s 1.391109067s 1.3939064s 1.397080584s 1.397929565s 1.399485088s 1.399854908s 1.400297332s 1.403484831s 1.407572105s 1.419828402s 1.423951109s 1.430434417s 1.43322248s 1.433641552s 1.438247518s 1.444006178s 1.449668146s 1.451088285s 1.464410573s 1.470015335s 1.471303289s 1.47350371s 1.474172425s 1.474844615s 1.475458826s 1.476519175s 1.479978232s 1.482320434s 1.483360273s 1.483504036s 1.484719588s 1.486765043s 1.48701834s 1.489873427s 1.493136215s 1.493939179s 1.494059598s 1.497525925s 1.498416045s 1.508409018s 1.521635942s 1.523727951s 1.527649038s 1.528542874s 1.529957403s 1.536127023s 1.542318392s 1.544731377s 1.553535513s 1.55694688s 1.560104703s 1.563832059s 1.564314615s 1.568669404s 1.581493607s 1.588947974s 1.594358351s 1.615973048s 1.631242442s 1.637377968s 1.637888952s 1.638774222s 1.648537242s 1.662126578s 1.665295253s 1.677413457s 1.693774565s 1.702106622s 1.708681687s 1.710217563s 1.710753094s 1.727640619s 1.772935867s 1.773868682s 1.777851462s 1.787332507s 1.808323545s 1.815577549s 1.816195345s 1.830206094s 1.83281528s 1.834179381s 1.840453225s 1.854948071s 1.900988748s 1.925743879s 1.964913343s 2.003316465s 2.067546009s 2.09029574s 2.141542007s 2.19296286s 2.208449141s 2.249897195s 2.279622023s]
Feb 17 13:33:57.299: INFO: 50 %ile: 1.383972543s
Feb 17 13:33:57.299: INFO: 90 %ile: 1.787332507s
Feb 17 13:33:57.299: INFO: 99 %ile: 2.249897195s
Feb 17 13:33:57.299: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:33:57.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8716" for this suite.
Feb 17 13:34:43.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:34:43.540: INFO: namespace svc-latency-8716 deletion completed in 46.173051594s

• [SLOW TEST:72.975 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:34:43.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 17 13:34:43.757: INFO: Waiting up to 5m0s for pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876" in namespace "downward-api-2071" to be "success or failure"
Feb 17 13:34:43.761: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373982ms
Feb 17 13:34:45.782: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025253663s
Feb 17 13:34:47.793: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036459992s
Feb 17 13:34:49.804: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047312581s
Feb 17 13:34:51.818: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061044189s
Feb 17 13:34:53.828: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070870799s
STEP: Saw pod success
Feb 17 13:34:53.828: INFO: Pod "downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876" satisfied condition "success or failure"
Feb 17 13:34:53.834: INFO: Trying to get logs from node iruya-node pod downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876 container dapi-container: 
STEP: delete the pod
Feb 17 13:34:54.283: INFO: Waiting for pod downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876 to disappear
Feb 17 13:34:54.291: INFO: Pod downward-api-44af6b66-9bcf-41d1-a6c5-0ae2bd9b1876 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:34:54.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2071" for this suite.
Feb 17 13:35:00.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:35:00.467: INFO: namespace downward-api-2071 deletion completed in 6.171911446s

• [SLOW TEST:16.926 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:35:00.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-d331398d-e5c2-483e-97eb-ff3daec63355
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d331398d-e5c2-483e-97eb-ff3daec63355
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:35:12.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5781" for this suite.
Feb 17 13:35:28.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:35:28.991: INFO: namespace configmap-5781 deletion completed in 16.176091992s

• [SLOW TEST:28.522 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:35:28.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 17 13:35:29.060: INFO: Waiting up to 5m0s for pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364" in namespace "emptydir-9364" to be "success or failure"
Feb 17 13:35:29.083: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364": Phase="Pending", Reason="", readiness=false. Elapsed: 22.943451ms
Feb 17 13:35:31.092: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032134078s
Feb 17 13:35:33.100: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040309535s
Feb 17 13:35:35.108: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047909967s
Feb 17 13:35:37.113: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053134846s
Feb 17 13:35:39.126: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065714391s
STEP: Saw pod success
Feb 17 13:35:39.126: INFO: Pod "pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364" satisfied condition "success or failure"
Feb 17 13:35:39.130: INFO: Trying to get logs from node iruya-node pod pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364 container test-container: 
STEP: delete the pod
Feb 17 13:35:39.243: INFO: Waiting for pod pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364 to disappear
Feb 17 13:35:39.254: INFO: Pod pod-c5a734fa-69f2-4ecb-82ac-1cfc415d7364 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:35:39.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9364" for this suite.
Feb 17 13:35:45.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:35:45.472: INFO: namespace emptydir-9364 deletion completed in 6.211707875s

• [SLOW TEST:16.481 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:35:45.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-271483a5-4aa9-4f4c-bcc9-2398fc83093b
STEP: Creating a pod to test consume configMaps
Feb 17 13:35:45.579: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d" in namespace "projected-421" to be "success or failure"
Feb 17 13:35:45.595: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.141973ms
Feb 17 13:35:47.601: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021671491s
Feb 17 13:35:49.610: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030487657s
Feb 17 13:35:51.618: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039275944s
Feb 17 13:35:53.629: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050365555s
Feb 17 13:35:55.639: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059528036s
Feb 17 13:35:57.652: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.072793415s
STEP: Saw pod success
Feb 17 13:35:57.652: INFO: Pod "pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d" satisfied condition "success or failure"
Feb 17 13:35:57.658: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:35:57.734: INFO: Waiting for pod pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d to disappear
Feb 17 13:35:57.744: INFO: Pod pod-projected-configmaps-c66eabdc-29c3-4e94-9003-fa1aa393dc6d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:35:57.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-421" for this suite.
Feb 17 13:36:03.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:36:03.944: INFO: namespace projected-421 deletion completed in 6.191943548s

• [SLOW TEST:18.470 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:36:03.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 13:36:04.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4826'
Feb 17 13:36:06.233: INFO: stderr: ""
Feb 17 13:36:06.233: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 17 13:36:06.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4826'
Feb 17 13:36:16.580: INFO: stderr: ""
Feb 17 13:36:16.581: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:36:16.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4826" for this suite.
Feb 17 13:36:22.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:36:22.739: INFO: namespace kubectl-4826 deletion completed in 6.122089616s

• [SLOW TEST:18.793 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:36:22.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 13:36:22.802: INFO: Creating ReplicaSet my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a
Feb 17 13:36:22.815: INFO: Pod name my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a: Found 0 pods out of 1
Feb 17 13:36:27.825: INFO: Pod name my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a: Found 1 pods out of 1
Feb 17 13:36:27.825: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a" is running
Feb 17 13:36:31.838: INFO: Pod "my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a-2c8k5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 13:36:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 13:36:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 13:36:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 13:36:22 +0000 UTC Reason: Message:}])
Feb 17 13:36:31.838: INFO: Trying to dial the pod
Feb 17 13:36:36.871: INFO: Controller my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a: Got expected result from replica 1 [my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a-2c8k5]: "my-hostname-basic-f9f0e6b2-9a60-4b50-a1c3-012a210db82a-2c8k5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:36:36.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2737" for this suite.
Feb 17 13:36:42.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:36:43.032: INFO: namespace replicaset-2737 deletion completed in 6.154354336s

• [SLOW TEST:20.293 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:36:43.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-63d44a1b-d64b-45b9-971a-6d6c9ad23236
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-63d44a1b-d64b-45b9-971a-6d6c9ad23236
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:38:15.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3334" for this suite.
Feb 17 13:38:37.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:38:37.492: INFO: namespace projected-3334 deletion completed in 22.158040222s

• [SLOW TEST:114.459 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:38:37.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9d546b8d-3668-4ad2-9ef1-708a6fde811f
STEP: Creating a pod to test consume configMaps
Feb 17 13:38:37.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1" in namespace "projected-3555" to be "success or failure"
Feb 17 13:38:37.619: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.631632ms
Feb 17 13:38:39.630: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023966782s
Feb 17 13:38:41.643: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037012342s
Feb 17 13:38:43.653: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046931793s
Feb 17 13:38:45.661: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054385783s
Feb 17 13:38:47.667: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061165117s
STEP: Saw pod success
Feb 17 13:38:47.667: INFO: Pod "pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1" satisfied condition "success or failure"
Feb 17 13:38:47.670: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:38:47.750: INFO: Waiting for pod pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1 to disappear
Feb 17 13:38:47.755: INFO: Pod pod-projected-configmaps-1e0be159-fcaf-4927-a53a-cdd2d6c87ea1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:38:47.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3555" for this suite.
Feb 17 13:38:55.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:38:55.957: INFO: namespace projected-3555 deletion completed in 8.196035777s

• [SLOW TEST:18.464 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:38:55.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-kd5w
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 13:38:56.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kd5w" in namespace "subpath-2733" to be "success or failure"
Feb 17 13:38:56.089: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 11.026336ms
Feb 17 13:38:58.124: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04598842s
Feb 17 13:39:00.146: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067712707s
Feb 17 13:39:02.152: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07387045s
Feb 17 13:39:04.161: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08316266s
Feb 17 13:39:06.171: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 10.093131458s
Feb 17 13:39:08.179: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 12.101359039s
Feb 17 13:39:10.187: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 14.109609177s
Feb 17 13:39:12.199: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 16.121016318s
Feb 17 13:39:14.205: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 18.126740513s
Feb 17 13:39:16.213: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 20.135042206s
Feb 17 13:39:18.223: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 22.145204189s
Feb 17 13:39:20.230: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 24.152389386s
Feb 17 13:39:22.236: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 26.158265517s
Feb 17 13:39:24.248: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Running", Reason="", readiness=true. Elapsed: 28.170550008s
Feb 17 13:39:26.257: INFO: Pod "pod-subpath-test-secret-kd5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.179119699s
STEP: Saw pod success
Feb 17 13:39:26.257: INFO: Pod "pod-subpath-test-secret-kd5w" satisfied condition "success or failure"
Feb 17 13:39:26.263: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-kd5w container test-container-subpath-secret-kd5w: 
STEP: delete the pod
Feb 17 13:39:26.352: INFO: Waiting for pod pod-subpath-test-secret-kd5w to disappear
Feb 17 13:39:26.370: INFO: Pod pod-subpath-test-secret-kd5w no longer exists
STEP: Deleting pod pod-subpath-test-secret-kd5w
Feb 17 13:39:26.370: INFO: Deleting pod "pod-subpath-test-secret-kd5w" in namespace "subpath-2733"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:39:26.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2733" for this suite.
Feb 17 13:39:32.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:39:32.624: INFO: namespace subpath-2733 deletion completed in 6.172602599s

• [SLOW TEST:36.666 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:39:32.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 13:39:32.777: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.763703ms)
Feb 17 13:39:32.783: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.956234ms)
Feb 17 13:39:32.788: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.902422ms)
Feb 17 13:39:32.793: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.108632ms)
Feb 17 13:39:32.797: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.985957ms)
Feb 17 13:39:32.802: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.493093ms)
Feb 17 13:39:32.836: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 34.738217ms)
Feb 17 13:39:32.847: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.906217ms)
Feb 17 13:39:32.856: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.906065ms)
Feb 17 13:39:32.864: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.519575ms)
Feb 17 13:39:32.870: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.34256ms)
Feb 17 13:39:32.877: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.300048ms)
Feb 17 13:39:32.882: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.122989ms)
Feb 17 13:39:32.887: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.21652ms)
Feb 17 13:39:32.893: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.78621ms)
Feb 17 13:39:32.899: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.471479ms)
Feb 17 13:39:32.909: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.194358ms)
Feb 17 13:39:32.914: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.263782ms)
Feb 17 13:39:32.919: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.629058ms)
Feb 17 13:39:32.923: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.291118ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:39:32.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1235" for this suite.
Feb 17 13:39:38.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:39:39.132: INFO: namespace proxy-1235 deletion completed in 6.204562922s

• [SLOW TEST:6.508 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:39:39.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8673
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 13:39:39.176: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 13:40:13.475: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8673 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:40:13.475: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:40:13.563251       8 log.go:172] (0xc0013944d0) (0xc001923ae0) Create stream
I0217 13:40:13.563314       8 log.go:172] (0xc0013944d0) (0xc001923ae0) Stream added, broadcasting: 1
I0217 13:40:13.571992       8 log.go:172] (0xc0013944d0) Reply frame received for 1
I0217 13:40:13.572052       8 log.go:172] (0xc0013944d0) (0xc001923c20) Create stream
I0217 13:40:13.572069       8 log.go:172] (0xc0013944d0) (0xc001923c20) Stream added, broadcasting: 3
I0217 13:40:13.575516       8 log.go:172] (0xc0013944d0) Reply frame received for 3
I0217 13:40:13.575586       8 log.go:172] (0xc0013944d0) (0xc0026b6aa0) Create stream
I0217 13:40:13.575609       8 log.go:172] (0xc0013944d0) (0xc0026b6aa0) Stream added, broadcasting: 5
I0217 13:40:13.577736       8 log.go:172] (0xc0013944d0) Reply frame received for 5
I0217 13:40:14.828466       8 log.go:172] (0xc0013944d0) Data frame received for 3
I0217 13:40:14.828579       8 log.go:172] (0xc001923c20) (3) Data frame handling
I0217 13:40:14.828604       8 log.go:172] (0xc001923c20) (3) Data frame sent
I0217 13:40:15.081199       8 log.go:172] (0xc0013944d0) (0xc001923c20) Stream removed, broadcasting: 3
I0217 13:40:15.081563       8 log.go:172] (0xc0013944d0) (0xc0026b6aa0) Stream removed, broadcasting: 5
I0217 13:40:15.081706       8 log.go:172] (0xc0013944d0) Data frame received for 1
I0217 13:40:15.081734       8 log.go:172] (0xc001923ae0) (1) Data frame handling
I0217 13:40:15.081762       8 log.go:172] (0xc001923ae0) (1) Data frame sent
I0217 13:40:15.081797       8 log.go:172] (0xc0013944d0) (0xc001923ae0) Stream removed, broadcasting: 1
I0217 13:40:15.081826       8 log.go:172] (0xc0013944d0) Go away received
I0217 13:40:15.082849       8 log.go:172] (0xc0013944d0) (0xc001923ae0) Stream removed, broadcasting: 1
I0217 13:40:15.082873       8 log.go:172] (0xc0013944d0) (0xc001923c20) Stream removed, broadcasting: 3
I0217 13:40:15.082888       8 log.go:172] (0xc0013944d0) (0xc0026b6aa0) Stream removed, broadcasting: 5
Feb 17 13:40:15.082: INFO: Found all expected endpoints: [netserver-0]
Feb 17 13:40:15.095: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8673 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:40:15.096: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:40:15.177653       8 log.go:172] (0xc0013bc790) (0xc001225400) Create stream
I0217 13:40:15.178091       8 log.go:172] (0xc0013bc790) (0xc001225400) Stream added, broadcasting: 1
I0217 13:40:15.195275       8 log.go:172] (0xc0013bc790) Reply frame received for 1
I0217 13:40:15.195523       8 log.go:172] (0xc0013bc790) (0xc001ace460) Create stream
I0217 13:40:15.195551       8 log.go:172] (0xc0013bc790) (0xc001ace460) Stream added, broadcasting: 3
I0217 13:40:15.198396       8 log.go:172] (0xc0013bc790) Reply frame received for 3
I0217 13:40:15.198511       8 log.go:172] (0xc0013bc790) (0xc0012255e0) Create stream
I0217 13:40:15.198538       8 log.go:172] (0xc0013bc790) (0xc0012255e0) Stream added, broadcasting: 5
I0217 13:40:15.202330       8 log.go:172] (0xc0013bc790) Reply frame received for 5
I0217 13:40:16.417791       8 log.go:172] (0xc0013bc790) Data frame received for 3
I0217 13:40:16.418033       8 log.go:172] (0xc001ace460) (3) Data frame handling
I0217 13:40:16.418107       8 log.go:172] (0xc001ace460) (3) Data frame sent
I0217 13:40:16.648424       8 log.go:172] (0xc0013bc790) Data frame received for 1
I0217 13:40:16.648637       8 log.go:172] (0xc0013bc790) (0xc001ace460) Stream removed, broadcasting: 3
I0217 13:40:16.648725       8 log.go:172] (0xc001225400) (1) Data frame handling
I0217 13:40:16.648755       8 log.go:172] (0xc001225400) (1) Data frame sent
I0217 13:40:16.648783       8 log.go:172] (0xc0013bc790) (0xc001225400) Stream removed, broadcasting: 1
I0217 13:40:16.649289       8 log.go:172] (0xc0013bc790) (0xc0012255e0) Stream removed, broadcasting: 5
I0217 13:40:16.649380       8 log.go:172] (0xc0013bc790) Go away received
I0217 13:40:16.649484       8 log.go:172] (0xc0013bc790) (0xc001225400) Stream removed, broadcasting: 1
I0217 13:40:16.649527       8 log.go:172] (0xc0013bc790) (0xc001ace460) Stream removed, broadcasting: 3
I0217 13:40:16.649610       8 log.go:172] (0xc0013bc790) (0xc0012255e0) Stream removed, broadcasting: 5
Feb 17 13:40:16.649: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:40:16.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8673" for this suite.
Feb 17 13:40:28.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:40:28.807: INFO: namespace pod-network-test-8673 deletion completed in 12.146834237s

• [SLOW TEST:49.675 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:40:28.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 13:40:28.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7695'
Feb 17 13:40:29.098: INFO: stderr: ""
Feb 17 13:40:29.098: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 17 13:40:39.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7695 -o json'
Feb 17 13:40:39.293: INFO: stderr: ""
Feb 17 13:40:39.293: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-17T13:40:29Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-7695\",\n        \"resourceVersion\": \"24702076\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7695/pods/e2e-test-nginx-pod\",\n        \"uid\": \"f080fce3-30b4-4002-a3d7-b865cab9e1a8\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-rvcw7\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-rvcw7\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-rvcw7\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:40:29Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:40:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:40:37Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:40:29Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://c15af3ff41032811fe31ee30936d064c4dd33ceb9a954f982f380629f0d5f358\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-02-17T13:40:37Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-17T13:40:29Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 17 13:40:39.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7695'
Feb 17 13:40:39.652: INFO: stderr: ""
Feb 17 13:40:39.652: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 17 13:40:39.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7695'
Feb 17 13:40:56.540: INFO: stderr: ""
Feb 17 13:40:56.540: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:40:56.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7695" for this suite.
Feb 17 13:41:02.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:41:02.826: INFO: namespace kubectl-7695 deletion completed in 6.260157448s

• [SLOW TEST:34.018 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
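The Kubectl-replace test above pipes an updated pod definition (same name, new image) into `kubectl replace`, then verifies the pod now runs `docker.io/library/busybox:1.29`. A minimal sketch of what that replacement manifest could look like, assuming the pod name, namespace, and images from the log (other field values are illustrative, not taken from the run):

```yaml
# Hypothetical replacement manifest: metadata.name matches the existing pod,
# only the image is swapped from nginx:1.14-alpine to busybox:1.29.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-7695
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29
    # illustrative: busybox exits immediately without a long-running command
    command: ["sleep", "3600"]
```

Applied with `kubectl replace -f - --namespace=kubectl-7695`, mirroring the command shown in the log.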
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:41:02.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 17 13:41:02.919: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 17 13:41:03.488: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 17 13:41:05.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 13:41:07.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 13:41:09.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 13:41:11.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 13:41:13.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717543663, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 13:41:20.929: INFO: Waited 5.090769801s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:41:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2314" for this suite.
Feb 17 13:41:27.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:41:27.822: INFO: namespace aggregator-2314 deletion completed in 6.19577963s

• [SLOW TEST:24.995 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
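Registering a sample API server with the aggregator, as the test above does, ultimately comes down to creating an APIService object that routes an API group/version to an in-cluster Service. A hedged sketch of such a registration; the group, version, and service names below are illustrative placeholders, not values taken from this run:

```yaml
# Hypothetical APIService of the kind the aggregator test registers.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  # illustrative: a production registration would set caBundle instead
  insecureSkipTLSVerify: true
  service:
    name: sample-api
    namespace: aggregator-2314
```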
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:41:27.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-85ad497f-629c-4acf-9671-58135c4647c4 in namespace container-probe-8627
Feb 17 13:41:35.989: INFO: Started pod busybox-85ad497f-629c-4acf-9671-58135c4647c4 in namespace container-probe-8627
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 13:41:35.997: INFO: Initial restart count of pod busybox-85ad497f-629c-4acf-9671-58135c4647c4 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:45:37.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8627" for this suite.
Feb 17 13:45:43.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:45:43.927: INFO: namespace container-probe-8627 deletion completed in 6.150818714s

• [SLOW TEST:256.104 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
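The probe test above creates a busybox pod whose exec liveness probe (`cat /tmp/health`, from the test name) keeps succeeding, then watches for four minutes to confirm restartCount stays at 0. A minimal sketch of such a pod; the container command and probe timings are illustrative assumptions, chosen so `/tmp/health` exists for the life of the container:

```yaml
# Sketch: a pod whose exec liveness probe never fails, so it is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-sketch
spec:
  containers:
  - name: busybox
    image: busybox
    # illustrative: create the probed file up front, then idle
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```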
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:45:43.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 17 13:45:44.608: INFO: created pod pod-service-account-defaultsa
Feb 17 13:45:44.608: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 17 13:45:44.642: INFO: created pod pod-service-account-mountsa
Feb 17 13:45:44.643: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 17 13:45:44.695: INFO: created pod pod-service-account-nomountsa
Feb 17 13:45:44.695: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 17 13:45:44.715: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 17 13:45:44.716: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 17 13:45:44.740: INFO: created pod pod-service-account-mountsa-mountspec
Feb 17 13:45:44.740: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 17 13:45:44.753: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 17 13:45:44.753: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 17 13:45:44.790: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 17 13:45:44.790: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 17 13:45:44.883: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 17 13:45:44.883: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 17 13:45:45.871: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 17 13:45:45.871: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:45:45.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5201" for this suite.
Feb 17 13:46:15.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:46:15.280: INFO: namespace svcaccounts-5201 deletion completed in 29.022323046s

• [SLOW TEST:31.352 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
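The ServiceAccounts test above crosses service-account-level and pod-level automount settings; as the `*-mountspec` / `*-nomountspec` cases in the log show, when `automountServiceAccountToken` is set on the pod spec it overrides the service account's setting. A sketch of opting a single pod out of token automount (pod name is illustrative):

```yaml
# Sketch: pod-level opt-out wins regardless of the service account's setting,
# matching the mountsa-nomountspec case above (mount: false).
apiVersion: v1
kind: Pod
metadata:
  name: no-token-sketch
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
  - name: app
    image: busybox
    command: ["sleep", "600"]
```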
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:46:15.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 17 13:46:15.600: INFO: Waiting up to 5m0s for pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b" in namespace "downward-api-8712" to be "success or failure"
Feb 17 13:46:15.616: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.596598ms
Feb 17 13:46:17.634: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033567627s
Feb 17 13:46:19.647: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047123716s
Feb 17 13:46:21.657: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056480563s
Feb 17 13:46:23.672: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072113008s
Feb 17 13:46:25.678: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.077696612s
Feb 17 13:46:27.688: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.088039997s
Feb 17 13:46:29.811: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.210426761s
STEP: Saw pod success
Feb 17 13:46:29.811: INFO: Pod "downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b" satisfied condition "success or failure"
Feb 17 13:46:29.816: INFO: Trying to get logs from node iruya-node pod downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b container dapi-container: 
STEP: delete the pod
Feb 17 13:46:30.019: INFO: Waiting for pod downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b to disappear
Feb 17 13:46:30.038: INFO: Pod downward-api-37915066-c1dd-4e0b-9632-bfdbed926d1b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:46:30.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8712" for this suite.
Feb 17 13:46:36.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:46:36.294: INFO: namespace downward-api-8712 deletion completed in 6.239726105s

• [SLOW TEST:21.014 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
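The Downward API test above injects the pod's name, namespace, and IP into a container as environment variables via `fieldRef`. A sketch of the wiring it exercises; the variable names and pod name here are illustrative:

```yaml
# Sketch: downward-API env vars sourced from pod metadata and status.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-sketch
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]   # prints the injected variables, then exits
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```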
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:46:36.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 17 13:46:45.144: INFO: Successfully updated pod "annotationupdate7514f9e0-dda0-45cc-810b-cb2621c123ec"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:46:49.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3250" for this suite.
Feb 17 13:47:11.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:47:11.452: INFO: namespace downward-api-3250 deletion completed in 22.184672337s

• [SLOW TEST:35.156 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:47:11.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 17 13:47:11.618: INFO: Number of nodes with available pods: 0
Feb 17 13:47:11.618: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:12.644: INFO: Number of nodes with available pods: 0
Feb 17 13:47:12.644: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:13.638: INFO: Number of nodes with available pods: 0
Feb 17 13:47:13.638: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:15.021: INFO: Number of nodes with available pods: 0
Feb 17 13:47:15.021: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:15.638: INFO: Number of nodes with available pods: 0
Feb 17 13:47:15.638: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:16.643: INFO: Number of nodes with available pods: 0
Feb 17 13:47:16.643: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:18.553: INFO: Number of nodes with available pods: 0
Feb 17 13:47:18.553: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:19.137: INFO: Number of nodes with available pods: 0
Feb 17 13:47:19.137: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:19.650: INFO: Number of nodes with available pods: 0
Feb 17 13:47:19.650: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:20.763: INFO: Number of nodes with available pods: 0
Feb 17 13:47:20.763: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:21.643: INFO: Number of nodes with available pods: 0
Feb 17 13:47:21.643: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:22.653: INFO: Number of nodes with available pods: 1
Feb 17 13:47:22.653: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:23.634: INFO: Number of nodes with available pods: 2
Feb 17 13:47:23.635: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 17 13:47:23.712: INFO: Number of nodes with available pods: 1
Feb 17 13:47:23.713: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:24.737: INFO: Number of nodes with available pods: 1
Feb 17 13:47:24.737: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:25.727: INFO: Number of nodes with available pods: 1
Feb 17 13:47:25.727: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:26.727: INFO: Number of nodes with available pods: 1
Feb 17 13:47:26.727: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:27.736: INFO: Number of nodes with available pods: 1
Feb 17 13:47:27.736: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:28.731: INFO: Number of nodes with available pods: 1
Feb 17 13:47:28.731: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:29.792: INFO: Number of nodes with available pods: 1
Feb 17 13:47:29.792: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:30.733: INFO: Number of nodes with available pods: 1
Feb 17 13:47:30.733: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:31.727: INFO: Number of nodes with available pods: 1
Feb 17 13:47:31.727: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:32.732: INFO: Number of nodes with available pods: 1
Feb 17 13:47:32.732: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:33.728: INFO: Number of nodes with available pods: 1
Feb 17 13:47:33.728: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:34.723: INFO: Number of nodes with available pods: 1
Feb 17 13:47:34.723: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:35.724: INFO: Number of nodes with available pods: 1
Feb 17 13:47:35.724: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:36.723: INFO: Number of nodes with available pods: 1
Feb 17 13:47:36.723: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:47:37.756: INFO: Number of nodes with available pods: 2
Feb 17 13:47:37.756: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9020, will wait for the garbage collector to delete the pods
Feb 17 13:47:37.849: INFO: Deleting DaemonSet.extensions daemon-set took: 28.564531ms
Feb 17 13:47:38.150: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.21127ms
Feb 17 13:47:56.661: INFO: Number of nodes with available pods: 0
Feb 17 13:47:56.661: INFO: Number of running nodes: 0, number of available pods: 0
Feb 17 13:47:56.666: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9020/daemonsets","resourceVersion":"24702981"},"items":null}

Feb 17 13:47:56.670: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9020/pods","resourceVersion":"24702981"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:47:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9020" for this suite.
Feb 17 13:48:02.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:48:02.852: INFO: namespace daemonsets-9020 deletion completed in 6.16048769s

• [SLOW TEST:51.400 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:48:02.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 17 13:48:02.991: INFO: Waiting up to 5m0s for pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c" in namespace "emptydir-3439" to be "success or failure"
Feb 17 13:48:03.002: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868676ms
Feb 17 13:48:05.009: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017976437s
Feb 17 13:48:07.021: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029973027s
Feb 17 13:48:09.028: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037548987s
Feb 17 13:48:11.043: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052002316s
Feb 17 13:48:13.059: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067938344s
STEP: Saw pod success
Feb 17 13:48:13.059: INFO: Pod "pod-d438a99d-ecc8-4260-bff9-e0b9e913477c" satisfied condition "success or failure"
Feb 17 13:48:13.063: INFO: Trying to get logs from node iruya-node pod pod-d438a99d-ecc8-4260-bff9-e0b9e913477c container test-container: 
STEP: delete the pod
Feb 17 13:48:13.118: INFO: Waiting for pod pod-d438a99d-ecc8-4260-bff9-e0b9e913477c to disappear
Feb 17 13:48:13.125: INFO: Pod pod-d438a99d-ecc8-4260-bff9-e0b9e913477c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:48:13.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3439" for this suite.
Feb 17 13:48:19.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:48:19.346: INFO: namespace emptydir-3439 deletion completed in 6.211550916s

• [SLOW TEST:16.491 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:48:19.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-be29e472-364c-48c0-ac70-324277fc019d
STEP: Creating a pod to test consume configMaps
Feb 17 13:48:19.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31" in namespace "projected-5636" to be "success or failure"
Feb 17 13:48:19.445: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31": Phase="Pending", Reason="", readiness=false. Elapsed: 13.109841ms
Feb 17 13:48:21.453: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021240971s
Feb 17 13:48:23.487: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055627383s
Feb 17 13:48:25.498: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066171226s
Feb 17 13:48:27.509: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077323868s
Feb 17 13:48:29.517: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085783523s
STEP: Saw pod success
Feb 17 13:48:29.517: INFO: Pod "pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31" satisfied condition "success or failure"
Feb 17 13:48:29.523: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:48:29.946: INFO: Waiting for pod pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31 to disappear
Feb 17 13:48:29.979: INFO: Pod pod-projected-configmaps-21bd871c-a718-4390-8385-e09fdc3e4e31 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:48:29.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5636" for this suite.
Feb 17 13:48:36.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:48:36.203: INFO: namespace projected-5636 deletion completed in 6.199702611s

• [SLOW TEST:16.856 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:48:36.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 17 13:48:36.421: INFO: Number of nodes with available pods: 0
Feb 17 13:48:36.421: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:37.794: INFO: Number of nodes with available pods: 0
Feb 17 13:48:37.795: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:38.454: INFO: Number of nodes with available pods: 0
Feb 17 13:48:38.454: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:39.625: INFO: Number of nodes with available pods: 0
Feb 17 13:48:39.625: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:40.436: INFO: Number of nodes with available pods: 0
Feb 17 13:48:40.436: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:41.436: INFO: Number of nodes with available pods: 0
Feb 17 13:48:41.436: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:43.104: INFO: Number of nodes with available pods: 0
Feb 17 13:48:43.104: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:43.440: INFO: Number of nodes with available pods: 0
Feb 17 13:48:43.440: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:44.434: INFO: Number of nodes with available pods: 0
Feb 17 13:48:44.434: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:45.470: INFO: Number of nodes with available pods: 1
Feb 17 13:48:45.470: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:46.438: INFO: Number of nodes with available pods: 1
Feb 17 13:48:46.438: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:47.440: INFO: Number of nodes with available pods: 2
Feb 17 13:48:47.440: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 17 13:48:47.527: INFO: Number of nodes with available pods: 1
Feb 17 13:48:47.528: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:48.543: INFO: Number of nodes with available pods: 1
Feb 17 13:48:48.543: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:49.545: INFO: Number of nodes with available pods: 1
Feb 17 13:48:49.545: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:50.626: INFO: Number of nodes with available pods: 1
Feb 17 13:48:50.626: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:51.551: INFO: Number of nodes with available pods: 1
Feb 17 13:48:51.551: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:52.549: INFO: Number of nodes with available pods: 1
Feb 17 13:48:52.549: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:53.541: INFO: Number of nodes with available pods: 1
Feb 17 13:48:53.541: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:54.578: INFO: Number of nodes with available pods: 1
Feb 17 13:48:54.578: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:55.544: INFO: Number of nodes with available pods: 1
Feb 17 13:48:55.544: INFO: Node iruya-node is running more than one daemon pod
Feb 17 13:48:56.564: INFO: Number of nodes with available pods: 2
Feb 17 13:48:56.564: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-749, will wait for the garbage collector to delete the pods
Feb 17 13:48:56.691: INFO: Deleting DaemonSet.extensions daemon-set took: 35.536153ms
Feb 17 13:48:56.992: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.794011ms
Feb 17 13:49:07.908: INFO: Number of nodes with available pods: 0
Feb 17 13:49:07.908: INFO: Number of running nodes: 0, number of available pods: 0
Feb 17 13:49:07.920: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-749/daemonsets","resourceVersion":"24703204"},"items":null}

Feb 17 13:49:07.924: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-749/pods","resourceVersion":"24703204"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:49:07.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-749" for this suite.
Feb 17 13:49:14.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:49:14.166: INFO: namespace daemonsets-749 deletion completed in 6.167783729s

• [SLOW TEST:37.962 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:49:14.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:49:14.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7285" for this suite.
Feb 17 13:49:36.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:49:36.523: INFO: namespace pods-7285 deletion completed in 22.191158476s

• [SLOW TEST:22.357 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:49:36.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 17 13:49:47.260: INFO: Successfully updated pod "pod-update-ad5fe602-b30e-460b-aaa6-e72fdab53e74"
STEP: verifying the updated pod is in kubernetes
Feb 17 13:49:47.300: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:49:47.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3717" for this suite.
Feb 17 13:50:09.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:50:09.543: INFO: namespace pods-3717 deletion completed in 22.236652169s

• [SLOW TEST:33.018 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:50:09.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 17 13:50:09.664: INFO: Waiting up to 5m0s for pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b" in namespace "containers-8411" to be "success or failure"
Feb 17 13:50:09.674: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006528ms
Feb 17 13:50:11.686: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021152573s
Feb 17 13:50:13.701: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036130024s
Feb 17 13:50:15.709: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044595643s
Feb 17 13:50:17.721: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b": Phase="Running", Reason="", readiness=true. Elapsed: 8.056898254s
Feb 17 13:50:19.731: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067021544s
STEP: Saw pod success
Feb 17 13:50:19.731: INFO: Pod "client-containers-84b09acb-c168-43d7-b279-02046ff61f2b" satisfied condition "success or failure"
Feb 17 13:50:19.736: INFO: Trying to get logs from node iruya-node pod client-containers-84b09acb-c168-43d7-b279-02046ff61f2b container test-container: 
STEP: delete the pod
Feb 17 13:50:19.994: INFO: Waiting for pod client-containers-84b09acb-c168-43d7-b279-02046ff61f2b to disappear
Feb 17 13:50:20.004: INFO: Pod client-containers-84b09acb-c168-43d7-b279-02046ff61f2b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:50:20.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8411" for this suite.
Feb 17 13:50:26.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:50:26.185: INFO: namespace containers-8411 deletion completed in 6.173110761s

• [SLOW TEST:16.642 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:50:26.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6f0fc85e-c7bc-488c-87bb-7cca36fe9374
STEP: Creating configMap with name cm-test-opt-upd-be363650-30ff-4e48-a6be-90de65a73f33
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6f0fc85e-c7bc-488c-87bb-7cca36fe9374
STEP: Updating configmap cm-test-opt-upd-be363650-30ff-4e48-a6be-90de65a73f33
STEP: Creating configMap with name cm-test-opt-create-49d81471-37eb-440a-a32c-a81cb5ea7c59
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:50:43.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-355" for this suite.
Feb 17 13:51:05.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:51:05.474: INFO: namespace configmap-355 deletion completed in 22.157143225s

• [SLOW TEST:39.288 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:51:05.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 13:51:05.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f" in namespace "downward-api-6647" to be "success or failure"
Feb 17 13:51:05.650: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.354225ms
Feb 17 13:51:07.662: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030773768s
Feb 17 13:51:09.674: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042776813s
Feb 17 13:51:11.681: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050457509s
Feb 17 13:51:13.691: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060055701s
Feb 17 13:51:15.705: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074510588s
Feb 17 13:51:17.723: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.092109446s
STEP: Saw pod success
Feb 17 13:51:17.723: INFO: Pod "downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f" satisfied condition "success or failure"
Feb 17 13:51:17.729: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f container client-container: 
STEP: delete the pod
Feb 17 13:51:18.185: INFO: Waiting for pod downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f to disappear
Feb 17 13:51:18.222: INFO: Pod downwardapi-volume-50bd49ec-e8e3-47ab-9f8c-1c14a813a24f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:51:18.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6647" for this suite.
Feb 17 13:51:24.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:51:24.452: INFO: namespace downward-api-6647 deletion completed in 6.222824286s

• [SLOW TEST:18.978 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
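Each "Waiting up to 5m0s for pod ... to be success or failure" sequence above polls pod status roughly every 2 seconds, logging the phase and elapsed time until the phase reaches Succeeded. Those values can be recovered from the log lines themselves; a small parsing sketch (the regex is illustrative, not part of the framework):

```python
import re

line = ('Feb 17 13:51:17.723: INFO: Pod "downwardapi-volume-50bd49ec": '
        'Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.092109446s')

m = re.search(r'Phase="(\w+)".*Elapsed: ([\d.]+)s', line)
phase, elapsed = m.group(1), float(m.group(2))
# phase == "Succeeded", elapsed == 12.092109446
```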
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:51:24.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3435d760-8414-471c-910c-efb277a9c38c
STEP: Creating a pod to test consume configMaps
Feb 17 13:51:24.547: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb" in namespace "projected-1384" to be "success or failure"
Feb 17 13:51:24.550: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516069ms
Feb 17 13:51:26.559: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011815151s
Feb 17 13:51:28.566: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019086259s
Feb 17 13:51:30.645: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097862198s
Feb 17 13:51:32.653: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105847792s
Feb 17 13:51:34.746: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.199003523s
STEP: Saw pod success
Feb 17 13:51:34.746: INFO: Pod "pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb" satisfied condition "success or failure"
Feb 17 13:51:34.750: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:51:34.795: INFO: Waiting for pod pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb to disappear
Feb 17 13:51:34.798: INFO: Pod pod-projected-configmaps-a8012a37-191d-4e26-a467-46868034babb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:51:34.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1384" for this suite.
Feb 17 13:51:40.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:51:41.021: INFO: namespace projected-1384 deletion completed in 6.218255761s

• [SLOW TEST:16.569 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
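The "[SLOW TEST:16.569 seconds]" figure corresponds to the wall-clock span from the spec's first log line (13:51:24.453) to namespace deletion completing (13:51:41.021). A sketch computing such a duration from two log timestamps:

```python
from datetime import datetime

fmt = "%b %d %H:%M:%S.%f"
start = datetime.strptime("Feb 17 13:51:24.453", fmt)
end = datetime.strptime("Feb 17 13:51:41.021", fmt)
duration = (end - start).total_seconds()
# duration == 16.568, matching the reported 16.569 s to within log rounding
```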
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:51:41.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-678c0fa3-9335-4f31-a27e-74ed95eefd52
STEP: Creating a pod to test consume configMaps
Feb 17 13:51:41.155: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde" in namespace "projected-3079" to be "success or failure"
Feb 17 13:51:41.164: INFO: Pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435799ms
Feb 17 13:51:43.175: INFO: Pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020100772s
Feb 17 13:51:45.183: INFO: Pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027649556s
Feb 17 13:51:47.192: INFO: Pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036808042s
Feb 17 13:51:49.205: INFO: Pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050111803s
STEP: Saw pod success
Feb 17 13:51:49.205: INFO: Pod "pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde" satisfied condition "success or failure"
Feb 17 13:51:49.210: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:51:49.335: INFO: Waiting for pod pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde to disappear
Feb 17 13:51:49.343: INFO: Pod pod-projected-configmaps-cfae7c82-6df8-41e2-8ab4-3aa685e60dde no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:51:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3079" for this suite.
Feb 17 13:51:55.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:51:55.549: INFO: namespace projected-3079 deletion completed in 6.200459499s

• [SLOW TEST:14.527 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:51:55.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 17 13:51:55.615: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:52:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9779" for this suite.
Feb 17 13:52:35.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:52:35.270: INFO: namespace init-container-9779 deletion completed in 22.139686656s

• [SLOW TEST:39.721 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
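The spec above exercises init container ordering: init containers run one at a time, in order, each to successful completion, before any app container starts. A toy model of that sequencing (a sketch, not the kubelet's actual logic):

```python
def run_pod(init_containers, app_containers):
    """Init containers run sequentially; app containers start only
    after every init container has exited with code 0."""
    started = []
    for name, exit_code in init_containers:
        started.append(name)
        if exit_code != 0:
            return started, False  # pod stays uninitialized; apps never start
    for name, _ in app_containers:
        started.append(name)
    return started, True

order, ok = run_pod([("init1", 0), ("init2", 0)], [("run1", 0)])
# order == ["init1", "init2", "run1"], ok == True
```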
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:52:35.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 17 13:52:35.352: INFO: PodSpec: initContainers in spec.initContainers
Feb 17 13:53:38.415: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-cfb322dd-3094-4d84-a28e-0d96d5781821", GenerateName:"", Namespace:"init-container-3867", SelfLink:"/api/v1/namespaces/init-container-3867/pods/pod-init-cfb322dd-3094-4d84-a28e-0d96d5781821", UID:"7a7fa733-4c3c-4898-8fea-d4705f224b29", ResourceVersion:"24703840", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717544355, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"352856966"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zqp8c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00087ca80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zqp8c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zqp8c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zqp8c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000fca218), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0025be300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000fca320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000fca350)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000fca358), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000fca35c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717544355, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717544355, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717544355, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717544355, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002764400), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015cc0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015cc150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://8bc246f48132b78ed8940a037b8c0f1b89db0d3723fa411d07d7e6464fa3ee50"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002764520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002764480), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:53:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3867" for this suite.
Feb 17 13:54:00.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:54:00.715: INFO: namespace init-container-3867 deletion completed in 22.186608937s

• [SLOW TEST:85.445 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
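The pod dump above shows `init1` (running `/bin/false`) with `RestartCount:3` roughly a minute after pod creation (13:52:35 → 13:53:38), while `init2` and `run1` never start. That cadence is roughly consistent with the kubelet's crash back-off, which starts around 10 s and doubles per restart, capped at 5 m. An approximate schedule sketch (timings are an assumption, not read from the log):

```python
def restart_times(initial=10.0, cap=300.0, n=4):
    """Approximate crash-loop schedule: immediate first run, then
    back-off delays of 10s, 20s, 40s, ... capped at `cap` seconds."""
    t, delay, times = 0.0, initial, [0.0]
    for _ in range(n - 1):
        t += delay
        times.append(t)
        delay = min(delay * 2, cap)
    return times

schedule = restart_times()
# schedule == [0.0, 10.0, 30.0, 70.0]: a few restarts fit inside the
# ~63 s window observed in the log before the test gives up waiting
```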
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:54:00.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 17 13:54:00.844: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 17 13:54:05.851: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:54:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4453" for this suite.
Feb 17 13:54:13.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:54:13.231: INFO: namespace replication-controller-4453 deletion completed in 6.204517716s

• [SLOW TEST:12.515 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
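The ReplicationController spec above releases a pod by changing its label so the controller's selector no longer matches it; on the next sync the controller orphans the pod instead of deleting it. A minimal sketch of equality-based selector matching (labels here are illustrative):

```python
def selector_matches(selector, labels):
    """An equality-based selector matches iff every selector
    key/value pair is present in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}
pod_labels = {"name": "pod-release", "time": "352856966"}
matched_before = selector_matches(selector, pod_labels)

pod_labels["name"] = "released"  # relabel the pod...
matched_after = selector_matches(selector, pod_labels)
# matched_before == True, matched_after == False: the pod is released
```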
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:54:13.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 13:54:13.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059" in namespace "projected-2916" to be "success or failure"
Feb 17 13:54:13.464: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 15.47904ms
Feb 17 13:54:15.478: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029501918s
Feb 17 13:54:17.485: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035935636s
Feb 17 13:54:19.497: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048166907s
Feb 17 13:54:21.525: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075806719s
Feb 17 13:54:23.560: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111339286s
Feb 17 13:54:25.575: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125796102s
Feb 17 13:54:27.582: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.132929379s
STEP: Saw pod success
Feb 17 13:54:27.582: INFO: Pod "downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059" satisfied condition "success or failure"
Feb 17 13:54:27.585: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059 container client-container: 
STEP: delete the pod
Feb 17 13:54:27.726: INFO: Waiting for pod downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059 to disappear
Feb 17 13:54:27.733: INFO: Pod downwardapi-volume-a9664fe6-3178-4347-a6dc-cfa3113ff059 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:54:27.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2916" for this suite.
Feb 17 13:54:33.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:54:33.930: INFO: namespace projected-2916 deletion completed in 6.186460875s

• [SLOW TEST:20.699 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:54:33.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-eddb57b5-e795-4d3b-af59-b50996b8e29f
STEP: Creating a pod to test consume configMaps
Feb 17 13:54:34.082: INFO: Waiting up to 5m0s for pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842" in namespace "configmap-3582" to be "success or failure"
Feb 17 13:54:34.088: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842": Phase="Pending", Reason="", readiness=false. Elapsed: 5.079393ms
Feb 17 13:54:36.101: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018691754s
Feb 17 13:54:38.108: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025336715s
Feb 17 13:54:40.115: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032522584s
Feb 17 13:54:42.121: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038762064s
Feb 17 13:54:44.132: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048987047s
STEP: Saw pod success
Feb 17 13:54:44.132: INFO: Pod "pod-configmaps-74657434-0027-4238-9af1-4aba08c32842" satisfied condition "success or failure"
Feb 17 13:54:44.137: INFO: Trying to get logs from node iruya-node pod pod-configmaps-74657434-0027-4238-9af1-4aba08c32842 container configmap-volume-test: 
STEP: delete the pod
Feb 17 13:54:44.188: INFO: Waiting for pod pod-configmaps-74657434-0027-4238-9af1-4aba08c32842 to disappear
Feb 17 13:54:44.206: INFO: Pod pod-configmaps-74657434-0027-4238-9af1-4aba08c32842 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:54:44.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3582" for this suite.
Feb 17 13:54:50.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:54:50.396: INFO: namespace configmap-3582 deletion completed in 6.184488979s

• [SLOW TEST:16.466 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:54:50.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8055
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 13:54:50.488: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 13:55:30.710: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-8055 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:55:30.711: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:55:30.810605       8 log.go:172] (0xc002b831e0) (0xc0022388c0) Create stream
I0217 13:55:30.810676       8 log.go:172] (0xc002b831e0) (0xc0022388c0) Stream added, broadcasting: 1
I0217 13:55:30.817061       8 log.go:172] (0xc002b831e0) Reply frame received for 1
I0217 13:55:30.817141       8 log.go:172] (0xc002b831e0) (0xc001c32460) Create stream
I0217 13:55:30.817158       8 log.go:172] (0xc002b831e0) (0xc001c32460) Stream added, broadcasting: 3
I0217 13:55:30.819573       8 log.go:172] (0xc002b831e0) Reply frame received for 3
I0217 13:55:30.819596       8 log.go:172] (0xc002b831e0) (0xc002238960) Create stream
I0217 13:55:30.819603       8 log.go:172] (0xc002b831e0) (0xc002238960) Stream added, broadcasting: 5
I0217 13:55:30.822827       8 log.go:172] (0xc002b831e0) Reply frame received for 5
I0217 13:55:31.050295       8 log.go:172] (0xc002b831e0) Data frame received for 3
I0217 13:55:31.050379       8 log.go:172] (0xc001c32460) (3) Data frame handling
I0217 13:55:31.050407       8 log.go:172] (0xc001c32460) (3) Data frame sent
I0217 13:55:31.189083       8 log.go:172] (0xc002b831e0) Data frame received for 1
I0217 13:55:31.189328       8 log.go:172] (0xc0022388c0) (1) Data frame handling
I0217 13:55:31.189351       8 log.go:172] (0xc0022388c0) (1) Data frame sent
I0217 13:55:31.190832       8 log.go:172] (0xc002b831e0) (0xc0022388c0) Stream removed, broadcasting: 1
I0217 13:55:31.191349       8 log.go:172] (0xc002b831e0) (0xc001c32460) Stream removed, broadcasting: 3
I0217 13:55:31.191970       8 log.go:172] (0xc002b831e0) (0xc002238960) Stream removed, broadcasting: 5
I0217 13:55:31.192076       8 log.go:172] (0xc002b831e0) (0xc0022388c0) Stream removed, broadcasting: 1
I0217 13:55:31.192125       8 log.go:172] (0xc002b831e0) (0xc001c32460) Stream removed, broadcasting: 3
I0217 13:55:31.192208       8 log.go:172] (0xc002b831e0) (0xc002238960) Stream removed, broadcasting: 5
Feb 17 13:55:31.192: INFO: Waiting for endpoints: map[]
I0217 13:55:31.192472       8 log.go:172] (0xc002b831e0) Go away received
Feb 17 13:55:31.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8055 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:55:31.232: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:55:31.309994       8 log.go:172] (0xc000a958c0) (0xc001c328c0) Create stream
I0217 13:55:31.310034       8 log.go:172] (0xc000a958c0) (0xc001c328c0) Stream added, broadcasting: 1
I0217 13:55:31.316895       8 log.go:172] (0xc000a958c0) Reply frame received for 1
I0217 13:55:31.316951       8 log.go:172] (0xc000a958c0) (0xc002edefa0) Create stream
I0217 13:55:31.316972       8 log.go:172] (0xc000a958c0) (0xc002edefa0) Stream added, broadcasting: 3
I0217 13:55:31.318829       8 log.go:172] (0xc000a958c0) Reply frame received for 3
I0217 13:55:31.318892       8 log.go:172] (0xc000a958c0) (0xc0003a19a0) Create stream
I0217 13:55:31.318902       8 log.go:172] (0xc000a958c0) (0xc0003a19a0) Stream added, broadcasting: 5
I0217 13:55:31.319981       8 log.go:172] (0xc000a958c0) Reply frame received for 5
I0217 13:55:31.447362       8 log.go:172] (0xc000a958c0) Data frame received for 3
I0217 13:55:31.447404       8 log.go:172] (0xc002edefa0) (3) Data frame handling
I0217 13:55:31.447427       8 log.go:172] (0xc002edefa0) (3) Data frame sent
I0217 13:55:31.563998       8 log.go:172] (0xc000a958c0) Data frame received for 1
I0217 13:55:31.564189       8 log.go:172] (0xc000a958c0) (0xc002edefa0) Stream removed, broadcasting: 3
I0217 13:55:31.564285       8 log.go:172] (0xc001c328c0) (1) Data frame handling
I0217 13:55:31.564496       8 log.go:172] (0xc001c328c0) (1) Data frame sent
I0217 13:55:31.564510       8 log.go:172] (0xc000a958c0) (0xc0003a19a0) Stream removed, broadcasting: 5
I0217 13:55:31.564547       8 log.go:172] (0xc000a958c0) (0xc001c328c0) Stream removed, broadcasting: 1
I0217 13:55:31.564565       8 log.go:172] (0xc000a958c0) Go away received
I0217 13:55:31.564802       8 log.go:172] (0xc000a958c0) (0xc001c328c0) Stream removed, broadcasting: 1
I0217 13:55:31.564840       8 log.go:172] (0xc000a958c0) (0xc002edefa0) Stream removed, broadcasting: 3
I0217 13:55:31.564861       8 log.go:172] (0xc000a958c0) (0xc0003a19a0) Stream removed, broadcasting: 5
Feb 17 13:55:31.564: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:55:31.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8055" for this suite.
Feb 17 13:55:55.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:55:55.727: INFO: namespace pod-network-test-8055 deletion completed in 24.15186076s

• [SLOW TEST:65.330 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:55:55.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-420c2dab-a984-4611-998b-69254c092d8b
STEP: Creating a pod to test consume secrets
Feb 17 13:55:55.852: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb" in namespace "projected-2544" to be "success or failure"
Feb 17 13:55:55.883: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.160812ms
Feb 17 13:55:57.900: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047855834s
Feb 17 13:55:59.908: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055029662s
Feb 17 13:56:01.914: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061271358s
Feb 17 13:56:03.930: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077880903s
Feb 17 13:56:05.959: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106479664s
STEP: Saw pod success
Feb 17 13:56:05.959: INFO: Pod "pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb" satisfied condition "success or failure"
Feb 17 13:56:05.964: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb container projected-secret-volume-test: 
STEP: delete the pod
Feb 17 13:56:06.066: INFO: Waiting for pod pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb to disappear
Feb 17 13:56:06.074: INFO: Pod pod-projected-secrets-e656c735-ea10-4ed8-80d5-3c7163641abb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:56:06.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2544" for this suite.
Feb 17 13:56:12.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:56:12.269: INFO: namespace projected-2544 deletion completed in 6.189502186s

• [SLOW TEST:16.542 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:56:12.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-8dfb
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 13:56:13.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8dfb" in namespace "subpath-71" to be "success or failure"
Feb 17 13:56:13.183: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560997ms
Feb 17 13:56:15.198: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021728024s
Feb 17 13:56:17.248: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072390501s
Feb 17 13:56:19.254: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078481481s
Feb 17 13:56:21.908: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73231694s
Feb 17 13:56:23.921: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.744807987s
Feb 17 13:56:25.931: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 12.754539932s
Feb 17 13:56:27.939: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 14.762879495s
Feb 17 13:56:29.950: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 16.77353507s
Feb 17 13:56:31.961: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 18.784982994s
Feb 17 13:56:33.971: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 20.795277096s
Feb 17 13:56:35.986: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 22.809882785s
Feb 17 13:56:37.993: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 24.817288439s
Feb 17 13:56:40.001: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 26.824673287s
Feb 17 13:56:42.010: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 28.834299046s
Feb 17 13:56:44.018: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Running", Reason="", readiness=true. Elapsed: 30.841720431s
Feb 17 13:56:46.028: INFO: Pod "pod-subpath-test-configmap-8dfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.852040919s
STEP: Saw pod success
Feb 17 13:56:46.028: INFO: Pod "pod-subpath-test-configmap-8dfb" satisfied condition "success or failure"
Feb 17 13:56:46.035: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-8dfb container test-container-subpath-configmap-8dfb: 
STEP: delete the pod
Feb 17 13:56:46.105: INFO: Waiting for pod pod-subpath-test-configmap-8dfb to disappear
Feb 17 13:56:46.196: INFO: Pod pod-subpath-test-configmap-8dfb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8dfb
Feb 17 13:56:46.196: INFO: Deleting pod "pod-subpath-test-configmap-8dfb" in namespace "subpath-71"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 13:56:46.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-71" for this suite.
Feb 17 13:56:52.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:56:52.389: INFO: namespace subpath-71 deletion completed in 6.1700223s

• [SLOW TEST:40.119 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 13:56:52.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-329
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-329
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-329
Feb 17 13:56:52.601: INFO: Found 0 stateful pods, waiting for 1
Feb 17 13:57:02.610: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 17 13:57:02.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 13:57:05.461: INFO: stderr: "I0217 13:57:04.865836    1387 log.go:172] (0xc000104f20) (0xc0005f08c0) Create stream\nI0217 13:57:04.866036    1387 log.go:172] (0xc000104f20) (0xc0005f08c0) Stream added, broadcasting: 1\nI0217 13:57:04.872582    1387 log.go:172] (0xc000104f20) Reply frame received for 1\nI0217 13:57:04.872609    1387 log.go:172] (0xc000104f20) (0xc0007000a0) Create stream\nI0217 13:57:04.872621    1387 log.go:172] (0xc000104f20) (0xc0007000a0) Stream added, broadcasting: 3\nI0217 13:57:04.873798    1387 log.go:172] (0xc000104f20) Reply frame received for 3\nI0217 13:57:04.873818    1387 log.go:172] (0xc000104f20) (0xc000a86000) Create stream\nI0217 13:57:04.873828    1387 log.go:172] (0xc000104f20) (0xc000a86000) Stream added, broadcasting: 5\nI0217 13:57:04.875329    1387 log.go:172] (0xc000104f20) Reply frame received for 5\nI0217 13:57:05.131229    1387 log.go:172] (0xc000104f20) Data frame received for 5\nI0217 13:57:05.131280    1387 log.go:172] (0xc000a86000) (5) Data frame handling\nI0217 13:57:05.131307    1387 log.go:172] (0xc000a86000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:57:05.268930    1387 log.go:172] (0xc000104f20) Data frame received for 3\nI0217 13:57:05.268985    1387 log.go:172] (0xc0007000a0) (3) Data frame handling\nI0217 13:57:05.269198    1387 log.go:172] (0xc0007000a0) (3) Data frame sent\nI0217 13:57:05.451971    1387 log.go:172] (0xc000104f20) Data frame received for 1\nI0217 13:57:05.452229    1387 log.go:172] (0xc000104f20) (0xc0007000a0) Stream removed, broadcasting: 3\nI0217 13:57:05.452309    1387 log.go:172] (0xc000104f20) (0xc000a86000) Stream removed, broadcasting: 5\nI0217 13:57:05.452355    1387 log.go:172] (0xc0005f08c0) (1) Data frame handling\nI0217 13:57:05.452397    1387 log.go:172] (0xc0005f08c0) (1) Data frame sent\nI0217 13:57:05.452422    1387 log.go:172] (0xc000104f20) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0217 13:57:05.452448    1387 log.go:172] (0xc000104f20) Go away received\nI0217 13:57:05.452760    1387 log.go:172] (0xc000104f20) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0217 13:57:05.452787    1387 log.go:172] (0xc000104f20) (0xc0007000a0) Stream removed, broadcasting: 3\nI0217 13:57:05.452801    1387 log.go:172] (0xc000104f20) (0xc000a86000) Stream removed, broadcasting: 5\n"
Feb 17 13:57:05.461: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 13:57:05.461: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 13:57:05.475: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 17 13:57:15.483: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 13:57:15.483: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 13:57:15.567: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 17 13:57:15.567: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:15.567: INFO: 
Feb 17 13:57:15.567: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 17 13:57:17.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.953224767s
Feb 17 13:57:18.736: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.20265567s
Feb 17 13:57:19.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.784571195s
Feb 17 13:57:20.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.77557686s
Feb 17 13:57:22.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.756070886s
Feb 17 13:57:23.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.252181107s
Feb 17 13:57:24.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.215625556s
Feb 17 13:57:25.372: INFO: Verifying statefulset ss doesn't scale past 3 for another 208.4631ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-329
Feb 17 13:57:26.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:57:27.066: INFO: stderr: "I0217 13:57:26.790920    1416 log.go:172] (0xc0009c62c0) (0xc00099c5a0) Create stream\nI0217 13:57:26.791126    1416 log.go:172] (0xc0009c62c0) (0xc00099c5a0) Stream added, broadcasting: 1\nI0217 13:57:26.795350    1416 log.go:172] (0xc0009c62c0) Reply frame received for 1\nI0217 13:57:26.795372    1416 log.go:172] (0xc0009c62c0) (0xc0006443c0) Create stream\nI0217 13:57:26.795380    1416 log.go:172] (0xc0009c62c0) (0xc0006443c0) Stream added, broadcasting: 3\nI0217 13:57:26.796299    1416 log.go:172] (0xc0009c62c0) Reply frame received for 3\nI0217 13:57:26.796314    1416 log.go:172] (0xc0009c62c0) (0xc000644460) Create stream\nI0217 13:57:26.796319    1416 log.go:172] (0xc0009c62c0) (0xc000644460) Stream added, broadcasting: 5\nI0217 13:57:26.797387    1416 log.go:172] (0xc0009c62c0) Reply frame received for 5\nI0217 13:57:26.915783    1416 log.go:172] (0xc0009c62c0) Data frame received for 5\nI0217 13:57:26.915899    1416 log.go:172] (0xc000644460) (5) Data frame handling\nI0217 13:57:26.915911    1416 log.go:172] (0xc000644460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 13:57:26.915935    1416 log.go:172] (0xc0009c62c0) Data frame received for 3\nI0217 13:57:26.915941    1416 log.go:172] (0xc0006443c0) (3) Data frame handling\nI0217 13:57:26.915949    1416 log.go:172] (0xc0006443c0) (3) Data frame sent\nI0217 13:57:27.058376    1416 log.go:172] (0xc0009c62c0) Data frame received for 1\nI0217 13:57:27.058702    1416 log.go:172] (0xc0009c62c0) (0xc000644460) Stream removed, broadcasting: 5\nI0217 13:57:27.058756    1416 log.go:172] (0xc00099c5a0) (1) Data frame handling\nI0217 13:57:27.058783    1416 log.go:172] (0xc0009c62c0) (0xc0006443c0) Stream removed, broadcasting: 3\nI0217 13:57:27.058842    1416 log.go:172] (0xc00099c5a0) (1) Data frame sent\nI0217 13:57:27.058862    1416 log.go:172] (0xc0009c62c0) (0xc00099c5a0) Stream removed, broadcasting: 1\nI0217 13:57:27.058881    1416 log.go:172] (0xc0009c62c0) Go away received\nI0217 13:57:27.059630    1416 log.go:172] (0xc0009c62c0) (0xc00099c5a0) Stream removed, broadcasting: 1\nI0217 13:57:27.059646    1416 log.go:172] (0xc0009c62c0) (0xc0006443c0) Stream removed, broadcasting: 3\nI0217 13:57:27.059652    1416 log.go:172] (0xc0009c62c0) (0xc000644460) Stream removed, broadcasting: 5\n"
Feb 17 13:57:27.066: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 13:57:27.066: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 13:57:27.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:57:27.513: INFO: stderr: "I0217 13:57:27.268974    1432 log.go:172] (0xc00094a0b0) (0xc00097e5a0) Create stream\nI0217 13:57:27.269111    1432 log.go:172] (0xc00094a0b0) (0xc00097e5a0) Stream added, broadcasting: 1\nI0217 13:57:27.273409    1432 log.go:172] (0xc00094a0b0) Reply frame received for 1\nI0217 13:57:27.273438    1432 log.go:172] (0xc00094a0b0) (0xc0005be140) Create stream\nI0217 13:57:27.273451    1432 log.go:172] (0xc00094a0b0) (0xc0005be140) Stream added, broadcasting: 3\nI0217 13:57:27.274441    1432 log.go:172] (0xc00094a0b0) Reply frame received for 3\nI0217 13:57:27.274464    1432 log.go:172] (0xc00094a0b0) (0xc00028e000) Create stream\nI0217 13:57:27.274473    1432 log.go:172] (0xc00094a0b0) (0xc00028e000) Stream added, broadcasting: 5\nI0217 13:57:27.275272    1432 log.go:172] (0xc00094a0b0) Reply frame received for 5\nI0217 13:57:27.395603    1432 log.go:172] (0xc00094a0b0) Data frame received for 5\nI0217 13:57:27.395663    1432 log.go:172] (0xc00028e000) (5) Data frame handling\nI0217 13:57:27.395697    1432 log.go:172] (0xc00028e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0217 13:57:27.395745    1432 log.go:172] (0xc00094a0b0) Data frame received for 3\nI0217 13:57:27.395753    1432 log.go:172] (0xc0005be140) (3) Data frame handling\nI0217 13:57:27.395787    1432 log.go:172] (0xc0005be140) (3) Data frame sent\nI0217 13:57:27.503239    1432 log.go:172] (0xc00094a0b0) (0xc0005be140) Stream removed, broadcasting: 3\nI0217 13:57:27.503566    1432 log.go:172] (0xc00094a0b0) Data frame received for 1\nI0217 13:57:27.503580    1432 log.go:172] (0xc00097e5a0) (1) Data frame handling\nI0217 13:57:27.503592    1432 log.go:172] (0xc00097e5a0) (1) Data frame sent\nI0217 13:57:27.503601    1432 log.go:172] (0xc00094a0b0) (0xc00097e5a0) Stream removed, broadcasting: 1\nI0217 13:57:27.503646    1432 log.go:172] (0xc00094a0b0) (0xc00028e000) Stream removed, broadcasting: 5\nI0217 13:57:27.503685    1432 log.go:172] (0xc00094a0b0) Go away received\nI0217 13:57:27.503990    1432 log.go:172] (0xc00094a0b0) (0xc00097e5a0) Stream removed, broadcasting: 1\nI0217 13:57:27.504121    1432 log.go:172] (0xc00094a0b0) (0xc0005be140) Stream removed, broadcasting: 3\nI0217 13:57:27.504158    1432 log.go:172] (0xc00094a0b0) (0xc00028e000) Stream removed, broadcasting: 5\n"
Feb 17 13:57:27.514: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 13:57:27.514: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 13:57:27.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:57:28.043: INFO: stderr: "I0217 13:57:27.758040    1450 log.go:172] (0xc0009ac0b0) (0xc00066e960) Create stream\nI0217 13:57:27.758178    1450 log.go:172] (0xc0009ac0b0) (0xc00066e960) Stream added, broadcasting: 1\nI0217 13:57:27.763897    1450 log.go:172] (0xc0009ac0b0) Reply frame received for 1\nI0217 13:57:27.763931    1450 log.go:172] (0xc0009ac0b0) (0xc000862000) Create stream\nI0217 13:57:27.763938    1450 log.go:172] (0xc0009ac0b0) (0xc000862000) Stream added, broadcasting: 3\nI0217 13:57:27.766065    1450 log.go:172] (0xc0009ac0b0) Reply frame received for 3\nI0217 13:57:27.766110    1450 log.go:172] (0xc0009ac0b0) (0xc0003c2000) Create stream\nI0217 13:57:27.766117    1450 log.go:172] (0xc0009ac0b0) (0xc0003c2000) Stream added, broadcasting: 5\nI0217 13:57:27.767666    1450 log.go:172] (0xc0009ac0b0) Reply frame received for 5\nI0217 13:57:27.869506    1450 log.go:172] (0xc0009ac0b0) Data frame received for 5\nI0217 13:57:27.869638    1450 log.go:172] (0xc0003c2000) (5) Data frame handling\nI0217 13:57:27.869682    1450 log.go:172] (0xc0003c2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0217 13:57:27.872298    1450 log.go:172] (0xc0009ac0b0) Data frame received for 3\nI0217 13:57:27.872330    1450 log.go:172] (0xc000862000) (3) Data frame handling\nI0217 13:57:27.872351    1450 log.go:172] (0xc000862000) (3) Data frame sent\nI0217 13:57:28.035792    1450 log.go:172] (0xc0009ac0b0) (0xc000862000) Stream removed, broadcasting: 3\nI0217 13:57:28.036026    1450 log.go:172] (0xc0009ac0b0) Data frame received for 1\nI0217 13:57:28.036072    1450 log.go:172] (0xc00066e960) (1) Data frame handling\nI0217 13:57:28.036145    1450 log.go:172] (0xc00066e960) (1) Data frame sent\nI0217 13:57:28.036324    1450 log.go:172] (0xc0009ac0b0) (0xc00066e960) Stream removed, broadcasting: 1\nI0217 13:57:28.036381    1450 log.go:172] (0xc0009ac0b0) (0xc0003c2000) Stream removed, broadcasting: 5\nI0217 13:57:28.036512    1450 log.go:172] (0xc0009ac0b0) Go away received\nI0217 13:57:28.036860    1450 log.go:172] (0xc0009ac0b0) (0xc00066e960) Stream removed, broadcasting: 1\nI0217 13:57:28.036927    1450 log.go:172] (0xc0009ac0b0) (0xc000862000) Stream removed, broadcasting: 3\nI0217 13:57:28.036949    1450 log.go:172] (0xc0009ac0b0) (0xc0003c2000) Stream removed, broadcasting: 5\n"
Feb 17 13:57:28.043: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 13:57:28.043: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 13:57:28.051: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 13:57:28.051: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 13:57:28.051: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 17 13:57:28.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 13:57:28.468: INFO: stderr: "I0217 13:57:28.197190    1470 log.go:172] (0xc0009a6630) (0xc0009888c0) Create stream\nI0217 13:57:28.197245    1470 log.go:172] (0xc0009a6630) (0xc0009888c0) Stream added, broadcasting: 1\nI0217 13:57:28.204067    1470 log.go:172] (0xc0009a6630) Reply frame received for 1\nI0217 13:57:28.204096    1470 log.go:172] (0xc0009a6630) (0xc00050e5a0) Create stream\nI0217 13:57:28.204104    1470 log.go:172] (0xc0009a6630) (0xc00050e5a0) Stream added, broadcasting: 3\nI0217 13:57:28.205145    1470 log.go:172] (0xc0009a6630) Reply frame received for 3\nI0217 13:57:28.205165    1470 log.go:172] (0xc0009a6630) (0xc000988000) Create stream\nI0217 13:57:28.205170    1470 log.go:172] (0xc0009a6630) (0xc000988000) Stream added, broadcasting: 5\nI0217 13:57:28.206399    1470 log.go:172] (0xc0009a6630) Reply frame received for 5\nI0217 13:57:28.312115    1470 log.go:172] (0xc0009a6630) Data frame received for 5\nI0217 13:57:28.312172    1470 log.go:172] (0xc000988000) (5) Data frame handling\nI0217 13:57:28.312188    1470 log.go:172] (0xc000988000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:57:28.312205    1470 log.go:172] (0xc0009a6630) Data frame received for 3\nI0217 13:57:28.312218    1470 log.go:172] (0xc00050e5a0) (3) Data frame handling\nI0217 13:57:28.312231    1470 log.go:172] (0xc00050e5a0) (3) Data frame sent\nI0217 13:57:28.459602    1470 log.go:172] (0xc0009a6630) (0xc00050e5a0) Stream removed, broadcasting: 3\nI0217 13:57:28.459739    1470 log.go:172] (0xc0009a6630) Data frame received for 1\nI0217 13:57:28.459762    1470 log.go:172] (0xc0009888c0) (1) Data frame handling\nI0217 13:57:28.459779    1470 log.go:172] (0xc0009888c0) (1) Data frame sent\nI0217 13:57:28.459823    1470 log.go:172] (0xc0009a6630) (0xc000988000) Stream removed, broadcasting: 5\nI0217 13:57:28.459862    1470 log.go:172] (0xc0009a6630) (0xc0009888c0) Stream removed, broadcasting: 1\nI0217 13:57:28.459878    1470 log.go:172] 
(0xc0009a6630) Go away received\nI0217 13:57:28.460419    1470 log.go:172] (0xc0009a6630) (0xc0009888c0) Stream removed, broadcasting: 1\nI0217 13:57:28.460442    1470 log.go:172] (0xc0009a6630) (0xc00050e5a0) Stream removed, broadcasting: 3\nI0217 13:57:28.460456    1470 log.go:172] (0xc0009a6630) (0xc000988000) Stream removed, broadcasting: 5\n"
Feb 17 13:57:28.468: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 13:57:28.468: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 13:57:28.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 13:57:28.948: INFO: stderr: "I0217 13:57:28.701345    1490 log.go:172] (0xc000116c60) (0xc0008b8780) Create stream\nI0217 13:57:28.701508    1490 log.go:172] (0xc000116c60) (0xc0008b8780) Stream added, broadcasting: 1\nI0217 13:57:28.707400    1490 log.go:172] (0xc000116c60) Reply frame received for 1\nI0217 13:57:28.707440    1490 log.go:172] (0xc000116c60) (0xc000352320) Create stream\nI0217 13:57:28.707447    1490 log.go:172] (0xc000116c60) (0xc000352320) Stream added, broadcasting: 3\nI0217 13:57:28.708438    1490 log.go:172] (0xc000116c60) Reply frame received for 3\nI0217 13:57:28.708517    1490 log.go:172] (0xc000116c60) (0xc0008b8820) Create stream\nI0217 13:57:28.708540    1490 log.go:172] (0xc000116c60) (0xc0008b8820) Stream added, broadcasting: 5\nI0217 13:57:28.709942    1490 log.go:172] (0xc000116c60) Reply frame received for 5\nI0217 13:57:28.822402    1490 log.go:172] (0xc000116c60) Data frame received for 5\nI0217 13:57:28.822485    1490 log.go:172] (0xc0008b8820) (5) Data frame handling\nI0217 13:57:28.822497    1490 log.go:172] (0xc0008b8820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:57:28.859981    1490 log.go:172] (0xc000116c60) Data frame received for 3\nI0217 13:57:28.860040    1490 log.go:172] (0xc000352320) (3) Data frame handling\nI0217 13:57:28.860053    1490 log.go:172] (0xc000352320) (3) Data frame sent\nI0217 13:57:28.941208    1490 log.go:172] (0xc000116c60) Data frame received for 1\nI0217 13:57:28.941239    1490 log.go:172] (0xc0008b8780) (1) Data frame handling\nI0217 13:57:28.941249    1490 log.go:172] (0xc0008b8780) (1) Data frame sent\nI0217 13:57:28.941369    1490 log.go:172] (0xc000116c60) (0xc0008b8780) Stream removed, broadcasting: 1\nI0217 13:57:28.941777    1490 log.go:172] (0xc000116c60) (0xc000352320) Stream removed, broadcasting: 3\nI0217 13:57:28.941850    1490 log.go:172] (0xc000116c60) (0xc0008b8820) Stream removed, broadcasting: 5\nI0217 13:57:28.941865    1490 log.go:172] 
(0xc000116c60) Go away received\nI0217 13:57:28.942112    1490 log.go:172] (0xc000116c60) (0xc0008b8780) Stream removed, broadcasting: 1\nI0217 13:57:28.942124    1490 log.go:172] (0xc000116c60) (0xc000352320) Stream removed, broadcasting: 3\nI0217 13:57:28.942134    1490 log.go:172] (0xc000116c60) (0xc0008b8820) Stream removed, broadcasting: 5\n"
Feb 17 13:57:28.948: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 13:57:28.948: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 13:57:28.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 13:57:29.324: INFO: stderr: "I0217 13:57:29.066657    1505 log.go:172] (0xc0007e8370) (0xc0005f6820) Create stream\nI0217 13:57:29.066745    1505 log.go:172] (0xc0007e8370) (0xc0005f6820) Stream added, broadcasting: 1\nI0217 13:57:29.070417    1505 log.go:172] (0xc0007e8370) Reply frame received for 1\nI0217 13:57:29.070438    1505 log.go:172] (0xc0007e8370) (0xc000944000) Create stream\nI0217 13:57:29.070445    1505 log.go:172] (0xc0007e8370) (0xc000944000) Stream added, broadcasting: 3\nI0217 13:57:29.071505    1505 log.go:172] (0xc0007e8370) Reply frame received for 3\nI0217 13:57:29.071523    1505 log.go:172] (0xc0007e8370) (0xc00087c000) Create stream\nI0217 13:57:29.071530    1505 log.go:172] (0xc0007e8370) (0xc00087c000) Stream added, broadcasting: 5\nI0217 13:57:29.072296    1505 log.go:172] (0xc0007e8370) Reply frame received for 5\nI0217 13:57:29.161148    1505 log.go:172] (0xc0007e8370) Data frame received for 5\nI0217 13:57:29.161184    1505 log.go:172] (0xc00087c000) (5) Data frame handling\nI0217 13:57:29.161194    1505 log.go:172] (0xc00087c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 13:57:29.196478    1505 log.go:172] (0xc0007e8370) Data frame received for 3\nI0217 13:57:29.196529    1505 log.go:172] (0xc000944000) (3) Data frame handling\nI0217 13:57:29.196552    1505 log.go:172] (0xc000944000) (3) Data frame sent\nI0217 13:57:29.315413    1505 log.go:172] (0xc0007e8370) Data frame received for 1\nI0217 13:57:29.315516    1505 log.go:172] (0xc0007e8370) (0xc00087c000) Stream removed, broadcasting: 5\nI0217 13:57:29.315544    1505 log.go:172] (0xc0005f6820) (1) Data frame handling\nI0217 13:57:29.315554    1505 log.go:172] (0xc0005f6820) (1) Data frame sent\nI0217 13:57:29.315624    1505 log.go:172] (0xc0007e8370) (0xc0005f6820) Stream removed, broadcasting: 1\nI0217 13:57:29.316218    1505 log.go:172] (0xc0007e8370) (0xc000944000) Stream removed, broadcasting: 3\nI0217 13:57:29.316235    1505 log.go:172] 
(0xc0007e8370) Go away received\nI0217 13:57:29.316448    1505 log.go:172] (0xc0007e8370) (0xc0005f6820) Stream removed, broadcasting: 1\nI0217 13:57:29.316504    1505 log.go:172] (0xc0007e8370) (0xc000944000) Stream removed, broadcasting: 3\nI0217 13:57:29.316535    1505 log.go:172] (0xc0007e8370) (0xc00087c000) Stream removed, broadcasting: 5\n"
Feb 17 13:57:29.324: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 13:57:29.324: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

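The three `exec` invocations above all use the same idempotent shell pattern, `mv -v SRC DST || true`, so the remote command reports success whether or not the file still exists — which is why the earlier ss-2 run shows `mv: can't rename '/tmp/index.html': No such file or directory` yet still completes cleanly. A minimal local sketch of that behavior (the temp directories are illustrative stand-ins for the container's `/usr/share/nginx/html` and `/tmp`, not the test's actual paths):

```shell
# Demonstrate the idempotent "mv ... || true" pattern the e2e test uses to
# break and later restore nginx's readiness file. Local temp dirs stand in
# for the container paths (assumption).
set -u
workdir=$(mktemp -d)
mkdir -p "$workdir/html" "$workdir/tmp"
echo ok > "$workdir/html/index.html"

# First invocation: the file exists, so mv succeeds and prints the rename.
mv -v "$workdir/html/index.html" "$workdir/tmp/" || true
echo "first exit status: $?"

# Second invocation: the source is gone; mv fails and prints "can't rename",
# but "|| true" masks the failure so the command still exits 0.
mv -v "$workdir/html/index.html" "$workdir/tmp/" 2>&1 || true
echo "second exit status: $?"

rm -rf "$workdir"
```

Because readiness is keyed off serving `index.html`, the first move flips each pod to Ready=false without killing it, which is exactly the precondition the scale-down test needs.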
Feb 17 13:57:29.324: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 13:57:29.330: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 17 13:57:39.342: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 13:57:39.342: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 13:57:39.342: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 13:57:39.367: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 17 13:57:39.367: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:39.367: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:39.367: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:39.367: INFO: 
Feb 17 13:57:39.367: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 13:57:41.188: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 17 13:57:41.188: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:41.188: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:41.188: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:41.188: INFO: 
Feb 17 13:57:41.188: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 13:57:42.206: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 17 13:57:42.206: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:42.206: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:42.206: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:42.206: INFO: 
Feb 17 13:57:42.206: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 13:57:43.521: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 17 13:57:43.521: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:43.521: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:43.521: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:43.521: INFO: 
Feb 17 13:57:43.521: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 13:57:44.537: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 17 13:57:44.537: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:44.537: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:44.537: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:44.537: INFO: 
Feb 17 13:57:44.537: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 13:57:45.544: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 17 13:57:45.544: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:45.544: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:45.545: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:45.545: INFO: 
Feb 17 13:57:45.545: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 13:57:46.565: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 17 13:57:46.565: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:46.566: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:46.566: INFO: 
Feb 17 13:57:46.566: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 17 13:57:47.581: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 17 13:57:47.581: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:47.581: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:47.581: INFO: 
Feb 17 13:57:47.581: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 17 13:57:48.598: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 17 13:57:48.598: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:56:52 +0000 UTC  }]
Feb 17 13:57:48.598: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 13:57:15 +0000 UTC  }]
Feb 17 13:57:48.599: INFO: 
Feb 17 13:57:48.599: INFO: StatefulSet ss has not reached scale 0, at 2
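The repeated "has not reached scale 0" lines above come from a poll-until-deadline loop: query the replica count, log progress, sleep, repeat until zero or timeout. A generic shell sketch of that pattern — `check_replicas` is a local stand-in (assumption) for a `kubectl get statefulset` status query, simulating one pod terminating per poll via a counter file:

```shell
# Poll-until-deadline loop mirroring the "Waiting for statefulset
# status.replicas updated to 0" phase above. check_replicas is a stand-in
# (assumption) for querying .status.replicas; a counter file persists state
# across the command-substitution subshells.
state=$(mktemp)
echo 3 > "$state"

check_replicas() {
  c=$(cat "$state")
  c=$((c - 1))          # simulate one pod terminating per poll
  echo "$c" > "$state"
  echo "$c"
}

deadline=$((SECONDS + 10))   # the real wait allows far longer
while [ "$SECONDS" -lt "$deadline" ]; do
  replicas=$(check_replicas)
  if [ "$replicas" -eq 0 ]; then
    echo "StatefulSet ss reached scale 0"
    break
  fi
  echo "StatefulSet ss has not reached scale 0, at $replicas"
  sleep 1
done
rm -f "$state"
```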
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-329
Feb 17 13:57:49.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:57:49.846: INFO: rc: 1
Feb 17 13:57:49.846: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002464b10 exit status 1   true [0xc000ba8508 0xc000ba8590 0xc000ba85e0] [0xc000ba8508 0xc000ba8590 0xc000ba85e0] [0xc000ba8538 0xc000ba85c8] [0xba6c50 0xba6c50] 0xc00242d1a0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
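The blocks that follow show RunHostCmd's retry behavior: re-run the command, and on a non-zero rc wait 10 s before trying again, up to an overall deadline. A hedged local sketch of that fixed-interval retry loop — `flaky_cmd` is a hypothetical stand-in that fails twice before succeeding, and the interval is shortened from 10 s for the example:

```shell
# Fixed-interval retry loop in the style of the framework's
# "Waiting 10s to retry failed RunHostCmd" messages above.
# flaky_cmd is a stand-in (assumption): it fails on the first two
# attempts and succeeds on the third, tracked via a counter file.
attempts=$(mktemp)
echo 0 > "$attempts"
flaky_cmd() {
  n=$(cat "$attempts")
  n=$((n + 1))
  echo "$n" > "$attempts"
  [ "$n" -ge 3 ]
}

interval=1    # the e2e framework uses 10 s
max_tries=5
try=1
while :; do
  if flaky_cmd; then
    echo "rc: 0 after $try tries"
    break
  fi
  echo "rc: 1"
  if [ "$try" -ge "$max_tries" ]; then
    echo "giving up"
    break
  fi
  try=$((try + 1))
  echo "Waiting ${interval}s to retry failed RunHostCmd"
  sleep "$interval"
done
rm -f "$attempts"
```

In the log below the command never succeeds because pod ss-0 has already been deleted by the scale-down, so every retry ends in `Error from server (NotFound)` until the wait times out or the test moves on.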
Feb 17 13:57:59.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:57:59.965: INFO: rc: 1
Feb 17 13:57:59.965: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002464bd0 exit status 1   true [0xc000ba85f8 0xc000ba8638 0xc000ba8678] [0xc000ba85f8 0xc000ba8638 0xc000ba8678] [0xc000ba8628 0xc000ba8660] [0xba6c50 0xba6c50] 0xc00242dce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:58:09.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:58:10.113: INFO: rc: 1
Feb 17 13:58:10.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00280cc30 exit status 1   true [0xc003222020 0xc003222048 0xc003222078] [0xc003222020 0xc003222048 0xc003222078] [0xc003222030 0xc003222070] [0xba6c50 0xba6c50] 0xc001b74ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:58:20.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:58:20.251: INFO: rc: 1
Feb 17 13:58:20.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00280cd20 exit status 1   true [0xc003222080 0xc003222098 0xc0032220b0] [0xc003222080 0xc003222098 0xc0032220b0] [0xc003222090 0xc0032220a8] [0xba6c50 0xba6c50] 0xc001959e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:58:30.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:58:30.382: INFO: rc: 1
Feb 17 13:58:30.382: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002464cc0 exit status 1   true [0xc000ba86c8 0xc000ba8718 0xc000ba8788] [0xc000ba86c8 0xc000ba8718 0xc000ba8788] [0xc000ba8700 0xc000ba8768] [0xba6c50 0xba6c50] 0xc001d5b3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:58:40.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:58:40.529: INFO: rc: 1
Feb 17 13:58:40.529: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00280ce10 exit status 1   true [0xc0032220b8 0xc0032220d0 0xc0032220e8] [0xc0032220b8 0xc0032220d0 0xc0032220e8] [0xc0032220c8 0xc0032220e0] [0xba6c50 0xba6c50] 0xc0014ba720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:58:50.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:58:50.674: INFO: rc: 1
Feb 17 13:58:50.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0020cd170 exit status 1   true [0xc0016f93f8 0xc0016f94a0 0xc0016f95e8] [0xc0016f93f8 0xc0016f94a0 0xc0016f95e8] [0xc0016f9438 0xc0016f9598] [0xba6c50 0xba6c50] 0xc001777ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:59:00.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:59:00.788: INFO: rc: 1
Feb 17 13:59:00.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002464d80 exit status 1   true [0xc000ba87a8 0xc000ba8800 0xc000ba8898] [0xc000ba87a8 0xc000ba8800 0xc000ba8898] [0xc000ba87f0 0xc000ba8858] [0xba6c50 0xba6c50] 0xc0019095c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:59:10.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:59:10.934: INFO: rc: 1
Feb 17 13:59:10.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0020cd290 exit status 1   true [0xc0016f96b0 0xc0016f96e8 0xc0016f9730] [0xc0016f96b0 0xc0016f96e8 0xc0016f9730] [0xc0016f96d8 0xc0016f9718] [0xba6c50 0xba6c50] 0xc001436b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 13:59:20.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 13:59:21.054: INFO: rc: 1
Feb 17 13:59:21.055: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00280cf00 exit status 1   true [0xc0032220f0 0xc003222108 0xc003222120] [0xc0032220f0 0xc003222108 0xc003222120] [0xc003222100 0xc003222118] [0xba6c50 0xba6c50] 0xc001c7f380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 14:02:45.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 14:02:45.372: INFO: rc: 1
Feb 17 14:02:45.373: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002150270 exit status 1   true [0xc000ba84d0 0xc000ba8528 0xc000ba85b8] [0xc000ba84d0 0xc000ba8528 0xc000ba85b8] [0xc000ba8508 0xc000ba8590] [0xba6c50 0xba6c50] 0xc0016ca000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 17 14:02:55.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 14:02:55.527: INFO: rc: 1
Feb 17 14:02:55.527: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 17 14:02:55.527: INFO: Scaling statefulset ss to 0
Feb 17 14:02:55.546: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 17 14:02:55.550: INFO: Deleting all statefulset in ns statefulset-329
Feb 17 14:02:55.554: INFO: Scaling statefulset ss to 0
Feb 17 14:02:55.567: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 14:02:55.571: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:02:55.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-329" for this suite.
Feb 17 14:03:01.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:03:01.776: INFO: namespace statefulset-329 deletion completed in 6.153607103s

• [SLOW TEST:369.387 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:03:01.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:03:12.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3491" for this suite.
Feb 17 14:03:58.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:03:58.214: INFO: namespace kubelet-test-3491 deletion completed in 46.18061085s

• [SLOW TEST:56.436 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:03:58.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 17 14:03:58.325: INFO: Waiting up to 5m0s for pod "pod-b231b53b-db53-4539-910e-1dd92641f499" in namespace "emptydir-9734" to be "success or failure"
Feb 17 14:03:58.332: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499": Phase="Pending", Reason="", readiness=false. Elapsed: 7.427004ms
Feb 17 14:04:00.342: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017030187s
Feb 17 14:04:02.348: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022798392s
Feb 17 14:04:04.354: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028820362s
Feb 17 14:04:06.368: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042681402s
Feb 17 14:04:08.379: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053997403s
STEP: Saw pod success
Feb 17 14:04:08.379: INFO: Pod "pod-b231b53b-db53-4539-910e-1dd92641f499" satisfied condition "success or failure"
Feb 17 14:04:08.390: INFO: Trying to get logs from node iruya-node pod pod-b231b53b-db53-4539-910e-1dd92641f499 container test-container: 
STEP: delete the pod
Feb 17 14:04:08.531: INFO: Waiting for pod pod-b231b53b-db53-4539-910e-1dd92641f499 to disappear
Feb 17 14:04:08.540: INFO: Pod pod-b231b53b-db53-4539-910e-1dd92641f499 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:04:08.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9734" for this suite.
Feb 17 14:04:14.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:04:14.732: INFO: namespace emptydir-9734 deletion completed in 6.184727056s

• [SLOW TEST:16.517 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:04:14.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:04:22.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4201" for this suite.
Feb 17 14:05:08.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:05:09.077: INFO: namespace kubelet-test-4201 deletion completed in 46.150206126s

• [SLOW TEST:54.343 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:05:09.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 17 14:05:19.756: INFO: Successfully updated pod "annotationupdate6075a6d7-4667-41a3-84d6-325fe55dbe36"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:05:21.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6198" for this suite.
Feb 17 14:05:43.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:05:44.037: INFO: namespace projected-6198 deletion completed in 22.189986686s

• [SLOW TEST:34.959 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:05:44.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2846
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 17 14:05:44.168: INFO: Found 0 stateful pods, waiting for 3
Feb 17 14:05:54.185: INFO: Found 2 stateful pods, waiting for 3
Feb 17 14:06:04.187: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:06:04.188: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:06:04.188: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 14:06:14.178: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:06:14.178: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:06:14.178: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:06:14.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2846 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 14:06:14.551: INFO: stderr: "I0217 14:06:14.327379    2090 log.go:172] (0xc000702370) (0xc0009905a0) Create stream\nI0217 14:06:14.327567    2090 log.go:172] (0xc000702370) (0xc0009905a0) Stream added, broadcasting: 1\nI0217 14:06:14.333528    2090 log.go:172] (0xc000702370) Reply frame received for 1\nI0217 14:06:14.333594    2090 log.go:172] (0xc000702370) (0xc000448320) Create stream\nI0217 14:06:14.333615    2090 log.go:172] (0xc000702370) (0xc000448320) Stream added, broadcasting: 3\nI0217 14:06:14.336069    2090 log.go:172] (0xc000702370) Reply frame received for 3\nI0217 14:06:14.336133    2090 log.go:172] (0xc000702370) (0xc0002d4000) Create stream\nI0217 14:06:14.336143    2090 log.go:172] (0xc000702370) (0xc0002d4000) Stream added, broadcasting: 5\nI0217 14:06:14.338860    2090 log.go:172] (0xc000702370) Reply frame received for 5\nI0217 14:06:14.404208    2090 log.go:172] (0xc000702370) Data frame received for 5\nI0217 14:06:14.404255    2090 log.go:172] (0xc0002d4000) (5) Data frame handling\nI0217 14:06:14.404271    2090 log.go:172] (0xc0002d4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 14:06:14.450084    2090 log.go:172] (0xc000702370) Data frame received for 3\nI0217 14:06:14.450178    2090 log.go:172] (0xc000448320) (3) Data frame handling\nI0217 14:06:14.450221    2090 log.go:172] (0xc000448320) (3) Data frame sent\nI0217 14:06:14.540407    2090 log.go:172] (0xc000702370) Data frame received for 1\nI0217 14:06:14.540604    2090 log.go:172] (0xc000702370) (0xc000448320) Stream removed, broadcasting: 3\nI0217 14:06:14.540782    2090 log.go:172] (0xc0009905a0) (1) Data frame handling\nI0217 14:06:14.540845    2090 log.go:172] (0xc0009905a0) (1) Data frame sent\nI0217 14:06:14.540889    2090 log.go:172] (0xc000702370) (0xc0002d4000) Stream removed, broadcasting: 5\nI0217 14:06:14.540940    2090 log.go:172] (0xc000702370) (0xc0009905a0) Stream removed, broadcasting: 1\nI0217 14:06:14.540954    2090 log.go:172] 
(0xc000702370) Go away received\nI0217 14:06:14.542203    2090 log.go:172] (0xc000702370) (0xc0009905a0) Stream removed, broadcasting: 1\nI0217 14:06:14.542380    2090 log.go:172] (0xc000702370) (0xc000448320) Stream removed, broadcasting: 3\nI0217 14:06:14.542418    2090 log.go:172] (0xc000702370) (0xc0002d4000) Stream removed, broadcasting: 5\n"
Feb 17 14:06:14.551: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 14:06:14.551: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 17 14:06:24.605: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 17 14:06:34.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2846 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 14:06:35.030: INFO: stderr: "I0217 14:06:34.819983    2108 log.go:172] (0xc0008322c0) (0xc00064c960) Create stream\nI0217 14:06:34.820189    2108 log.go:172] (0xc0008322c0) (0xc00064c960) Stream added, broadcasting: 1\nI0217 14:06:34.824312    2108 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0217 14:06:34.824341    2108 log.go:172] (0xc0008322c0) (0xc000368000) Create stream\nI0217 14:06:34.824350    2108 log.go:172] (0xc0008322c0) (0xc000368000) Stream added, broadcasting: 3\nI0217 14:06:34.825216    2108 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0217 14:06:34.825232    2108 log.go:172] (0xc0008322c0) (0xc0003680a0) Create stream\nI0217 14:06:34.825237    2108 log.go:172] (0xc0008322c0) (0xc0003680a0) Stream added, broadcasting: 5\nI0217 14:06:34.826221    2108 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0217 14:06:34.938146    2108 log.go:172] (0xc0008322c0) Data frame received for 5\nI0217 14:06:34.938299    2108 log.go:172] (0xc0003680a0) (5) Data frame handling\nI0217 14:06:34.938308    2108 log.go:172] (0xc0003680a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 14:06:34.938316    2108 log.go:172] (0xc0008322c0) Data frame received for 3\nI0217 14:06:34.938320    2108 log.go:172] (0xc000368000) (3) Data frame handling\nI0217 14:06:34.938323    2108 log.go:172] (0xc000368000) (3) Data frame sent\nI0217 14:06:35.024533    2108 log.go:172] (0xc0008322c0) Data frame received for 1\nI0217 14:06:35.024568    2108 log.go:172] (0xc00064c960) (1) Data frame handling\nI0217 14:06:35.024579    2108 log.go:172] (0xc00064c960) (1) Data frame sent\nI0217 14:06:35.024790    2108 log.go:172] (0xc0008322c0) (0xc00064c960) Stream removed, broadcasting: 1\nI0217 14:06:35.025348    2108 log.go:172] (0xc0008322c0) (0xc000368000) Stream removed, broadcasting: 3\nI0217 14:06:35.025565    2108 log.go:172] (0xc0008322c0) (0xc0003680a0) Stream removed, broadcasting: 5\nI0217 14:06:35.025583    2108 log.go:172] (0xc0008322c0) Go away received\nI0217 14:06:35.025619    2108 log.go:172] (0xc0008322c0) (0xc00064c960) Stream removed, broadcasting: 1\nI0217 14:06:35.025632    2108 log.go:172] (0xc0008322c0) (0xc000368000) Stream removed, broadcasting: 3\nI0217 14:06:35.025639    2108 log.go:172] (0xc0008322c0) (0xc0003680a0) Stream removed, broadcasting: 5\n"
Feb 17 14:06:35.030: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 14:06:35.030: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 14:06:45.094: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:06:45.094: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:06:45.094: INFO: Waiting for Pod statefulset-2846/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:06:55.172: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:06:55.172: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:06:55.172: INFO: Waiting for Pod statefulset-2846/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:07:05.120: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:07:05.120: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:07:15.119: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:07:15.119: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:07:25.116: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 17 14:07:35.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2846 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 14:07:37.533: INFO: stderr: "I0217 14:07:37.275353    2122 log.go:172] (0xc0006bc370) (0xc00054a780) Create stream\nI0217 14:07:37.275449    2122 log.go:172] (0xc0006bc370) (0xc00054a780) Stream added, broadcasting: 1\nI0217 14:07:37.278517    2122 log.go:172] (0xc0006bc370) Reply frame received for 1\nI0217 14:07:37.278567    2122 log.go:172] (0xc0006bc370) (0xc0007580a0) Create stream\nI0217 14:07:37.278582    2122 log.go:172] (0xc0006bc370) (0xc0007580a0) Stream added, broadcasting: 3\nI0217 14:07:37.280254    2122 log.go:172] (0xc0006bc370) Reply frame received for 3\nI0217 14:07:37.280312    2122 log.go:172] (0xc0006bc370) (0xc00032a000) Create stream\nI0217 14:07:37.280331    2122 log.go:172] (0xc0006bc370) (0xc00032a000) Stream added, broadcasting: 5\nI0217 14:07:37.281877    2122 log.go:172] (0xc0006bc370) Reply frame received for 5\nI0217 14:07:37.413924    2122 log.go:172] (0xc0006bc370) Data frame received for 5\nI0217 14:07:37.413978    2122 log.go:172] (0xc00032a000) (5) Data frame handling\nI0217 14:07:37.413996    2122 log.go:172] (0xc00032a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0217 14:07:37.445040    2122 log.go:172] (0xc0006bc370) Data frame received for 3\nI0217 14:07:37.445071    2122 log.go:172] (0xc0007580a0) (3) Data frame handling\nI0217 14:07:37.445085    2122 log.go:172] (0xc0007580a0) (3) Data frame sent\nI0217 14:07:37.521779    2122 log.go:172] (0xc0006bc370) (0xc0007580a0) Stream removed, broadcasting: 3\nI0217 14:07:37.521926    2122 log.go:172] (0xc0006bc370) Data frame received for 1\nI0217 14:07:37.521992    2122 log.go:172] (0xc0006bc370) (0xc00032a000) Stream removed, broadcasting: 5\nI0217 14:07:37.522051    2122 log.go:172] (0xc00054a780) (1) Data frame handling\nI0217 14:07:37.522076    2122 log.go:172] (0xc00054a780) (1) Data frame sent\nI0217 14:07:37.522096    2122 log.go:172] (0xc0006bc370) (0xc00054a780) Stream removed, broadcasting: 1\nI0217 14:07:37.522119    2122 log.go:172] (0xc0006bc370) Go away received\nI0217 14:07:37.522727    2122 log.go:172] (0xc0006bc370) (0xc00054a780) Stream removed, broadcasting: 1\nI0217 14:07:37.522741    2122 log.go:172] (0xc0006bc370) (0xc0007580a0) Stream removed, broadcasting: 3\nI0217 14:07:37.522755    2122 log.go:172] (0xc0006bc370) (0xc00032a000) Stream removed, broadcasting: 5\n"
Feb 17 14:07:37.534: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 14:07:37.534: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 14:07:47.594: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 17 14:07:57.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2846 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 14:07:58.055: INFO: stderr: "I0217 14:07:57.864185    2154 log.go:172] (0xc0009ee000) (0xc00097a140) Create stream\nI0217 14:07:57.864285    2154 log.go:172] (0xc0009ee000) (0xc00097a140) Stream added, broadcasting: 1\nI0217 14:07:57.867757    2154 log.go:172] (0xc0009ee000) Reply frame received for 1\nI0217 14:07:57.867828    2154 log.go:172] (0xc0009ee000) (0xc00059c320) Create stream\nI0217 14:07:57.867848    2154 log.go:172] (0xc0009ee000) (0xc00059c320) Stream added, broadcasting: 3\nI0217 14:07:57.869339    2154 log.go:172] (0xc0009ee000) Reply frame received for 3\nI0217 14:07:57.869382    2154 log.go:172] (0xc0009ee000) (0xc000350000) Create stream\nI0217 14:07:57.869410    2154 log.go:172] (0xc0009ee000) (0xc000350000) Stream added, broadcasting: 5\nI0217 14:07:57.870639    2154 log.go:172] (0xc0009ee000) Reply frame received for 5\nI0217 14:07:57.949771    2154 log.go:172] (0xc0009ee000) Data frame received for 3\nI0217 14:07:57.949937    2154 log.go:172] (0xc00059c320) (3) Data frame handling\nI0217 14:07:57.949970    2154 log.go:172] (0xc00059c320) (3) Data frame sent\nI0217 14:07:57.950097    2154 log.go:172] (0xc0009ee000) Data frame received for 5\nI0217 14:07:57.950121    2154 log.go:172] (0xc000350000) (5) Data frame handling\nI0217 14:07:57.950147    2154 log.go:172] (0xc000350000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0217 14:07:58.046823    2154 log.go:172] (0xc0009ee000) (0xc00059c320) Stream removed, broadcasting: 3\nI0217 14:07:58.047025    2154 log.go:172] (0xc0009ee000) Data frame received for 1\nI0217 14:07:58.047080    2154 log.go:172] (0xc00097a140) (1) Data frame handling\nI0217 14:07:58.047102    2154 log.go:172] (0xc00097a140) (1) Data frame sent\nI0217 14:07:58.047117    2154 log.go:172] (0xc0009ee000) (0xc00097a140) Stream removed, broadcasting: 1\nI0217 14:07:58.047210    2154 log.go:172] (0xc0009ee000) (0xc000350000) Stream removed, broadcasting: 5\nI0217 14:07:58.047438    2154 log.go:172] (0xc0009ee000) Go away received\nI0217 14:07:58.047877    2154 log.go:172] (0xc0009ee000) (0xc00097a140) Stream removed, broadcasting: 1\nI0217 14:07:58.047900    2154 log.go:172] (0xc0009ee000) (0xc00059c320) Stream removed, broadcasting: 3\nI0217 14:07:58.047909    2154 log.go:172] (0xc0009ee000) (0xc000350000) Stream removed, broadcasting: 5\n"
Feb 17 14:07:58.056: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 14:07:58.056: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 14:08:08.089: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:08:08.089: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 17 14:08:08.089: INFO: Waiting for Pod statefulset-2846/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 17 14:08:18.102: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:08:18.102: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 17 14:08:18.102: INFO: Waiting for Pod statefulset-2846/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 17 14:08:28.118: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:08:28.118: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 17 14:08:38.157: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
Feb 17 14:08:38.157: INFO: Waiting for Pod statefulset-2846/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 17 14:08:48.122: INFO: Waiting for StatefulSet statefulset-2846/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 17 14:08:58.112: INFO: Deleting all statefulset in ns statefulset-2846
Feb 17 14:08:58.117: INFO: Scaling statefulset ss2 to 0
Feb 17 14:09:28.185: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 14:09:28.191: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:09:28.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2846" for this suite.
Feb 17 14:09:36.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:09:36.391: INFO: namespace statefulset-2846 deletion completed in 8.174353532s

• [SLOW TEST:232.353 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
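The update-and-rollback sequence above hinges on the StatefulSet's `RollingUpdate` strategy: each template change creates a new ControllerRevision (here `ss2-6c5cd755cd` and `ss2-7c9b54fd4c`), and the controller replaces pods in reverse ordinal order (`ss2-1` before `ss2-0`), which is what the `Waiting for Pod ... to have revision ...` lines are polling for. A minimal manifest with this behavior might look like the following sketch; the image, labels, and service name are illustrative assumptions, not taken from the log:

```yaml
# Hypothetical StatefulSet resembling the ss2 object under test.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test        # illustrative headless-service name
  replicas: 2
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate    # pods roll in reverse ordinal order: ss2-1, then ss2-0
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: nginx       # changing template fields creates a new controller revision
```

Rolling back is just re-applying a previous template; because each template is recorded as a ControllerRevision, the rollback proceeds as an ordinary rolling update toward the older revision.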
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:09:36.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:09:36.526: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5" in namespace "downward-api-2352" to be "success or failure"
Feb 17 14:09:36.536: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.445147ms
Feb 17 14:09:38.547: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021465754s
Feb 17 14:09:40.598: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072404508s
Feb 17 14:09:42.617: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091164658s
Feb 17 14:09:44.636: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110109483s
Feb 17 14:09:46.646: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120358017s
STEP: Saw pod success
Feb 17 14:09:46.646: INFO: Pod "downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5" satisfied condition "success or failure"
Feb 17 14:09:46.650: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5 container client-container: 
STEP: delete the pod
Feb 17 14:09:46.724: INFO: Waiting for pod downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5 to disappear
Feb 17 14:09:46.731: INFO: Pod downwardapi-volume-96abb7c9-f743-4211-8d2e-53373a0626d5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:09:46.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2352" for this suite.
Feb 17 14:09:52.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:09:52.923: INFO: namespace downward-api-2352 deletion completed in 6.184205477s

• [SLOW TEST:16.532 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
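The test above relies on a downward API volume exposing `limits.cpu` from a container that sets no CPU limit; in that case the kubelet substitutes the node's allocatable CPU as the default. A minimal sketch of such a pod, with an illustrative image and paths (not taken from the log):

```yaml
# Hypothetical pod mirroring the downward API test's shape.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # illustrative; note no resources.limits.cpu is set
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # defaults to node allocatable CPU when no limit is set
```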
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:09:52.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-64137052-6ebe-4e88-b4f9-56c39992107f
STEP: Creating a pod to test consume configMaps
Feb 17 14:09:53.090: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74" in namespace "configmap-6601" to be "success or failure"
Feb 17 14:09:53.132: INFO: Pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74": Phase="Pending", Reason="", readiness=false. Elapsed: 41.141406ms
Feb 17 14:09:55.143: INFO: Pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052632413s
Feb 17 14:09:57.151: INFO: Pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060260036s
Feb 17 14:09:59.159: INFO: Pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068802026s
Feb 17 14:10:01.174: INFO: Pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083688621s
STEP: Saw pod success
Feb 17 14:10:01.174: INFO: Pod "pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74" satisfied condition "success or failure"
Feb 17 14:10:01.180: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74 container configmap-volume-test: 
STEP: delete the pod
Feb 17 14:10:01.296: INFO: Waiting for pod pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74 to disappear
Feb 17 14:10:01.302: INFO: Pod pod-configmaps-8f2256ce-28c7-4d64-bdcc-c40272ba2e74 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:10:01.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6601" for this suite.
Feb 17 14:10:07.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:10:07.457: INFO: namespace configmap-6601 deletion completed in 6.149186137s

• [SLOW TEST:14.532 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
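Consuming a ConfigMap as a volume, as this test does, amounts to referencing it from `spec.volumes` and mounting it into the container; each ConfigMap key becomes a file under the mount path. A hedged sketch with illustrative names and image:

```yaml
# Hypothetical pod resembling the ConfigMap volume consumer in the test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]  # illustrative key name
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume   # each key appears as a file under the mount path
```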
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:10:07.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-549h
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 14:10:07.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-549h" in namespace "subpath-6584" to be "success or failure"
Feb 17 14:10:07.590: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Pending", Reason="", readiness=false. Elapsed: 9.783495ms
Feb 17 14:10:09.621: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040585224s
Feb 17 14:10:11.630: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049366732s
Feb 17 14:10:13.645: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063996284s
Feb 17 14:10:15.655: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074788615s
Feb 17 14:10:17.665: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 10.083889108s
Feb 17 14:10:19.678: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 12.096969825s
Feb 17 14:10:21.686: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 14.105508685s
Feb 17 14:10:23.699: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 16.118721997s
Feb 17 14:10:25.709: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 18.12880529s
Feb 17 14:10:27.728: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 20.147198979s
Feb 17 14:10:29.743: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 22.162047321s
Feb 17 14:10:31.752: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 24.171663054s
Feb 17 14:10:33.761: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 26.180674577s
Feb 17 14:10:35.773: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 28.192335401s
Feb 17 14:10:37.780: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 30.199740848s
Feb 17 14:10:39.799: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Running", Reason="", readiness=true. Elapsed: 32.218830231s
Feb 17 14:10:41.809: INFO: Pod "pod-subpath-test-configmap-549h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.22861922s
STEP: Saw pod success
Feb 17 14:10:41.809: INFO: Pod "pod-subpath-test-configmap-549h" satisfied condition "success or failure"
Feb 17 14:10:41.817: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-549h container test-container-subpath-configmap-549h: 
STEP: delete the pod
Feb 17 14:10:42.021: INFO: Waiting for pod pod-subpath-test-configmap-549h to disappear
Feb 17 14:10:42.029: INFO: Pod pod-subpath-test-configmap-549h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-549h
Feb 17 14:10:42.030: INFO: Deleting pod "pod-subpath-test-configmap-549h" in namespace "subpath-6584"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:10:42.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6584" for this suite.
Feb 17 14:10:48.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:10:48.292: INFO: namespace subpath-6584 deletion completed in 6.244141965s

• [SLOW TEST:40.835 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
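The `subPath` variant exercised here mounts a single ConfigMap key over an existing file inside the image, rather than shadowing a whole directory; the long `Running` phase in the log corresponds to the container repeatedly reading the file while the atomic-writer updates land. A sketch under illustrative names:

```yaml
# Hypothetical pod showing a subPath mount over an existing file.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /etc/hostname"]   # reads the file the mount replaced
    volumeMounts:
    - name: config
      mountPath: /etc/hostname    # an existing file in the container image (illustrative)
      subPath: this_is_the_file   # a single key from the ConfigMap (illustrative)
  volumes:
  - name: config
    configMap:
      name: my-configmap          # illustrative name
```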
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:10:48.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 17 14:10:48.424: INFO: Waiting up to 5m0s for pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036" in namespace "var-expansion-816" to be "success or failure"
Feb 17 14:10:48.438: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036": Phase="Pending", Reason="", readiness=false. Elapsed: 13.489646ms
Feb 17 14:10:50.450: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025801099s
Feb 17 14:10:52.462: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037971284s
Feb 17 14:10:54.469: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044552075s
Feb 17 14:10:56.489: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064675375s
Feb 17 14:10:58.502: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077743554s
STEP: Saw pod success
Feb 17 14:10:58.502: INFO: Pod "var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036" satisfied condition "success or failure"
Feb 17 14:10:58.507: INFO: Trying to get logs from node iruya-node pod var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036 container dapi-container: 
STEP: delete the pod
Feb 17 14:10:58.564: INFO: Waiting for pod var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036 to disappear
Feb 17 14:10:58.579: INFO: Pod var-expansion-1a0c3a7b-74ee-47b0-956c-08ddf6d27036 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:10:58.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-816" for this suite.
Feb 17 14:11:04.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:11:04.788: INFO: namespace var-expansion-816 deletion completed in 6.177273981s

• [SLOW TEST:16.495 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
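Argument substitution tested above uses the `$(VAR)` syntax, which the kubelet expands from the container's `env` before the process starts (the shell never sees the variable). A minimal sketch with illustrative names:

```yaml
# Hypothetical pod demonstrating $(VAR) expansion in args.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet, not the shell
```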
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:11:04.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 17 14:11:14.867: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:11:14.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7315" for this suite.
Feb 17 14:11:20.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:11:21.074: INFO: namespace container-runtime-7315 deletion completed in 6.157158279s

• [SLOW TEST:16.286 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
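The termination-message check works because the kubelet reads the file named by `terminationMessagePath` after the container exits and surfaces its contents in the container status — the `Expected: &{DONE}` line above is that comparison. A sketch with a non-default path and non-root user; the UID, image, and path are assumptions:

```yaml
# Hypothetical pod matching the test's shape: non-root user, custom message path.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "printf DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default location
    securityContext:
      runAsUser: 1000                                 # non-root, per the test name
```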
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:11:21.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5675/configmap-test-8c36942f-1355-4cbf-9496-aec45f6ab75a
STEP: Creating a pod to test consume configMaps
Feb 17 14:11:22.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e" in namespace "configmap-5675" to be "success or failure"
Feb 17 14:11:24.295: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.399363737s
Feb 17 14:11:26.365: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.469900371s
Feb 17 14:11:28.421: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.525297965s
Feb 17 14:11:30.432: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.536576693s
Feb 17 14:11:32.440: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.54461779s
Feb 17 14:11:35.541: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.645132533s
STEP: Saw pod success
Feb 17 14:11:35.541: INFO: Pod "pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e" satisfied condition "success or failure"
Feb 17 14:11:35.547: INFO: Trying to get logs from node iruya-node pod pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e container env-test: 
STEP: delete the pod
Feb 17 14:11:35.724: INFO: Waiting for pod pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e to disappear
Feb 17 14:11:35.733: INFO: Pod pod-configmaps-58c19c45-bc79-4ba1-9a8a-f0269262236e no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:11:35.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5675" for this suite.
Feb 17 14:11:41.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:11:41.965: INFO: namespace configmap-5675 deletion completed in 6.224506793s

• [SLOW TEST:20.892 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
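Environment consumption differs from the volume tests earlier in this run: the value is resolved once at container start via `valueFrom.configMapKeyRef`, so later ConfigMap edits are not reflected in the running container. A sketch with an illustrative ConfigMap name and key:

```yaml
# Hypothetical pod consuming a ConfigMap key as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $CONFIG_DATA"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-test   # illustrative ConfigMap name
          key: data-1            # illustrative key; resolved once at container start
```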
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:11:41.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6d9c2136-9114-4758-b7e0-8e2a5c25a566
STEP: Creating a pod to test consume secrets
Feb 17 14:11:42.185: INFO: Waiting up to 5m0s for pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de" in namespace "secrets-5826" to be "success or failure"
Feb 17 14:11:42.274: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de": Phase="Pending", Reason="", readiness=false. Elapsed: 88.393751ms
Feb 17 14:11:44.280: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094385502s
Feb 17 14:11:46.327: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141719138s
Feb 17 14:11:48.332: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147212515s
Feb 17 14:11:50.591: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405604079s
Feb 17 14:11:52.597: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.411672054s
STEP: Saw pod success
Feb 17 14:11:52.597: INFO: Pod "pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de" satisfied condition "success or failure"
Feb 17 14:11:52.599: INFO: Trying to get logs from node iruya-node pod pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de container secret-volume-test: 
STEP: delete the pod
Feb 17 14:11:52.745: INFO: Waiting for pod pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de to disappear
Feb 17 14:11:52.754: INFO: Pod pod-secrets-ba5ecc87-b432-471d-a875-50d0479e03de no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:11:52.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5826" for this suite.
Feb 17 14:11:58.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:11:59.052: INFO: namespace secrets-5826 deletion completed in 6.291981748s

• [SLOW TEST:17.086 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:11:59.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:11:59.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:12:09.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3312" for this suite.
Feb 17 14:13:01.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:13:01.936: INFO: namespace pods-3312 deletion completed in 52.213619156s

• [SLOW TEST:62.882 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:13:01.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 17 14:13:02.052: INFO: Waiting up to 5m0s for pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92" in namespace "containers-611" to be "success or failure"
Feb 17 14:13:02.064: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92": Phase="Pending", Reason="", readiness=false. Elapsed: 12.515918ms
Feb 17 14:13:04.076: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024138651s
Feb 17 14:13:06.293: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24105177s
Feb 17 14:13:08.301: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249274632s
Feb 17 14:13:10.316: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263841939s
Feb 17 14:13:12.322: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.270690095s
STEP: Saw pod success
Feb 17 14:13:12.322: INFO: Pod "client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92" satisfied condition "success or failure"
Feb 17 14:13:12.325: INFO: Trying to get logs from node iruya-node pod client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92 container test-container: 
STEP: delete the pod
Feb 17 14:13:12.480: INFO: Waiting for pod client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92 to disappear
Feb 17 14:13:12.505: INFO: Pod client-containers-302ddc8d-1a3a-4230-8190-a6669797dc92 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:13:12.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-611" for this suite.
Feb 17 14:13:18.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:13:19.064: INFO: namespace containers-611 deletion completed in 6.551246818s

• [SLOW TEST:17.128 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:13:19.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:13:47.256: INFO: Container started at 2020-02-17 14:13:28 +0000 UTC, pod became ready at 2020-02-17 14:13:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:13:47.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4325" for this suite.
Feb 17 14:14:09.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:14:09.483: INFO: namespace container-probe-4325 deletion completed in 22.222641554s

• [SLOW TEST:50.419 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:14:09.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:14:09.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7" in namespace "projected-6779" to be "success or failure"
Feb 17 14:14:09.641: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.070672ms
Feb 17 14:14:11.651: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052042684s
Feb 17 14:14:13.661: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062345177s
Feb 17 14:14:15.674: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074926671s
Feb 17 14:14:17.680: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081740196s
Feb 17 14:14:19.689: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090509307s
STEP: Saw pod success
Feb 17 14:14:19.689: INFO: Pod "downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7" satisfied condition "success or failure"
Feb 17 14:14:19.694: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7 container client-container: 
STEP: delete the pod
Feb 17 14:14:19.756: INFO: Waiting for pod downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7 to disappear
Feb 17 14:14:19.765: INFO: Pod downwardapi-volume-27f7646a-833d-43fe-ba55-0ebba945d8a7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:14:19.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6779" for this suite.
Feb 17 14:14:25.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:14:26.029: INFO: namespace projected-6779 deletion completed in 6.189524266s

• [SLOW TEST:16.545 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:14:26.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 17 14:14:26.211: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706596,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 14:14:26.212: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706596,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 17 14:14:36.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706610,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 17 14:14:36.232: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706610,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 17 14:14:46.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706625,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 14:14:46.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706625,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 17 14:14:56.259: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706639,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 14:14:56.259: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-a,UID:5e5b6394-2222-40c9-97f3-c8c525636ec0,ResourceVersion:24706639,Generation:0,CreationTimestamp:2020-02-17 14:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 17 14:15:06.278: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-b,UID:221b9bf7-5f18-4dea-84de-3af68f98c594,ResourceVersion:24706653,Generation:0,CreationTimestamp:2020-02-17 14:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 14:15:06.278: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-b,UID:221b9bf7-5f18-4dea-84de-3af68f98c594,ResourceVersion:24706653,Generation:0,CreationTimestamp:2020-02-17 14:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 17 14:15:16.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-b,UID:221b9bf7-5f18-4dea-84de-3af68f98c594,ResourceVersion:24706668,Generation:0,CreationTimestamp:2020-02-17 14:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 14:15:16.296: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2158,SelfLink:/api/v1/namespaces/watch-2158/configmaps/e2e-watch-test-configmap-b,UID:221b9bf7-5f18-4dea-84de-3af68f98c594,ResourceVersion:24706668,Generation:0,CreationTimestamp:2020-02-17 14:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:15:26.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2158" for this suite.
Feb 17 14:15:32.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:15:32.513: INFO: namespace watch-2158 deletion completed in 6.207215311s

• [SLOW TEST:66.484 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:15:32.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-822388d9-e9a4-445d-948a-f31d40131dc9
STEP: Creating a pod to test consume secrets
Feb 17 14:15:32.670: INFO: Waiting up to 5m0s for pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060" in namespace "secrets-1678" to be "success or failure"
Feb 17 14:15:32.680: INFO: Pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060": Phase="Pending", Reason="", readiness=false. Elapsed: 9.441246ms
Feb 17 14:15:34.686: INFO: Pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015812049s
Feb 17 14:15:36.696: INFO: Pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026018681s
Feb 17 14:15:38.714: INFO: Pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043462435s
Feb 17 14:15:40.728: INFO: Pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057416314s
STEP: Saw pod success
Feb 17 14:15:40.728: INFO: Pod "pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060" satisfied condition "success or failure"
Feb 17 14:15:40.734: INFO: Trying to get logs from node iruya-node pod pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060 container secret-volume-test: 
STEP: delete the pod
Feb 17 14:15:40.912: INFO: Waiting for pod pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060 to disappear
Feb 17 14:15:40.923: INFO: Pod pod-secrets-47c8178a-ea37-4c9d-a518-0cd76d07e060 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:15:40.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1678" for this suite.
Feb 17 14:15:46.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:15:47.119: INFO: namespace secrets-1678 deletion completed in 6.186802901s

• [SLOW TEST:14.604 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:15:47.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:15:58.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9496" for this suite.
Feb 17 14:16:20.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:16:20.436: INFO: namespace replication-controller-9496 deletion completed in 22.204785818s

• [SLOW TEST:33.316 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:16:20.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-f89db969-ff2a-45fe-b41e-796abb31317e
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:16:20.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4462" for this suite.
Feb 17 14:16:26.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:16:26.836: INFO: namespace secrets-4462 deletion completed in 6.217201738s

• [SLOW TEST:6.399 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:16:26.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 14:16:26.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2148'
Feb 17 14:16:27.057: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 14:16:27.057: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 17 14:16:31.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2148'
Feb 17 14:16:31.200: INFO: stderr: ""
Feb 17 14:16:31.200: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:16:31.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2148" for this suite.
Feb 17 14:16:37.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:16:37.398: INFO: namespace kubectl-2148 deletion completed in 6.193646303s

• [SLOW TEST:10.561 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
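Editor's note: the test above uses the deprecated `kubectl run --generator=deployment/apps.v1` (removed in later kubectl releases, as the stderr warning notes). A rough declarative equivalent is sketched below; the `run:` label key is an assumption based on the generator's conventional default, not something shown in this log.

```yaml
# Approximate declarative equivalent of:
#   kubectl run e2e-test-nginx-deployment \
#     --image=docker.io/library/nginx:1.14-alpine \
#     --generator=deployment/apps.v1
# Label key `run` is assumed; only the name and image come from the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```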
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:16:37.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:16:37.533: INFO: Creating deployment "test-recreate-deployment"
Feb 17 14:16:37.542: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 17 14:16:37.567: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 17 14:16:39.585: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 17 14:16:39.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:16:41.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:16:43.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:16:45.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717545797, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:16:47.612: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 17 14:16:47.625: INFO: Updating deployment test-recreate-deployment
Feb 17 14:16:47.625: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 17 14:16:48.106: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9901,SelfLink:/apis/apps/v1/namespaces/deployment-9901/deployments/test-recreate-deployment,UID:e6dfc4cd-64f5-4062-bc36-0b3fabe8ea5b,ResourceVersion:24706933,Generation:2,CreationTimestamp:2020-02-17 14:16:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-17 14:16:47 +0000 UTC 2020-02-17 14:16:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-17 14:16:48 +0000 UTC 2020-02-17 14:16:37 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 17 14:16:48.235: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9901,SelfLink:/apis/apps/v1/namespaces/deployment-9901/replicasets/test-recreate-deployment-5c8c9cc69d,UID:68fcc0fc-f855-4255-afab-737804bb8294,ResourceVersion:24706931,Generation:1,CreationTimestamp:2020-02-17 14:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e6dfc4cd-64f5-4062-bc36-0b3fabe8ea5b 0xc0016cde47 0xc0016cde48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 14:16:48.235: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 17 14:16:48.236: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9901,SelfLink:/apis/apps/v1/namespaces/deployment-9901/replicasets/test-recreate-deployment-6df85df6b9,UID:459978fb-13eb-4760-a9c6-9bb2feeb2c64,ResourceVersion:24706921,Generation:2,CreationTimestamp:2020-02-17 14:16:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e6dfc4cd-64f5-4062-bc36-0b3fabe8ea5b 0xc001dfe057 0xc001dfe058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 14:16:48.247: INFO: Pod "test-recreate-deployment-5c8c9cc69d-qrpw7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-qrpw7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9901,SelfLink:/api/v1/namespaces/deployment-9901/pods/test-recreate-deployment-5c8c9cc69d-qrpw7,UID:8596ed30-ad42-4257-87ac-7c5802268649,ResourceVersion:24706932,Generation:0,CreationTimestamp:2020-02-17 14:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 68fcc0fc-f855-4255-afab-737804bb8294 0xc002195bf7 0xc002195bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-h52wg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h52wg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h52wg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002195c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002195c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:16:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:16:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:16:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-17 14:16:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:16:48.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9901" for this suite.
Feb 17 14:16:54.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:16:54.454: INFO: namespace deployment-9901 deletion completed in 6.202603912s

• [SLOW TEST:17.057 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
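Editor's note: the Deployment dump above shows `Strategy{Type:Recreate}` with label `name: sample-pod-3`, rolling from a redis pod template (revision 1) to nginx (revision 2). A minimal manifest reproducing that shape is sketched below; field values are taken from the dumped object where visible, everything else is an assumption.

```yaml
# Minimal sketch of the deployment under test (values from the object dump
# above where visible: name, label, Recreate strategy, revision-2 image,
# terminationGracePeriodSeconds). With strategy type Recreate, all old pods
# are deleted before any new-template pods are created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate      # no RollingUpdate parameters apply
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```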
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:16:54.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 17 14:17:08.362: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:17:08.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-587" for this suite.
Feb 17 14:17:14.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:17:14.720: INFO: namespace container-runtime-587 deletion completed in 6.281986953s

• [SLOW TEST:20.266 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
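Editor's note: the termination-message test above relies on `terminationMessagePolicy: FallbackToLogsOnError`, which makes the kubelet populate the container's termination message from its log tail when the container fails and writes nothing to the termination message file. A hypothetical pod sketch is below; the image and command are assumptions (the log only shows the expected message "DONE" and the Failed phase).

```yaml
# Hypothetical sketch of the pattern under test: the container exits non-zero
# without writing /dev/termination-log, so with FallbackToLogsOnError the
# kubelet takes the termination message ("DONE") from the container's logs.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-logs
spec:
  restartPolicy: Never
  containers:
  - name: term-msg-test
    image: busybox            # assumed image, not shown in the log
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```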
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:17:14.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 17 14:17:14.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3729'
Feb 17 14:17:15.317: INFO: stderr: ""
Feb 17 14:17:15.317: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 17 14:17:15.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3729'
Feb 17 14:17:15.513: INFO: stderr: ""
Feb 17 14:17:15.513: INFO: stdout: "update-demo-nautilus-8mqlr update-demo-nautilus-dfb8j "
Feb 17 14:17:15.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mqlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:17:15.608: INFO: stderr: ""
Feb 17 14:17:15.608: INFO: stdout: ""
Feb 17 14:17:15.608: INFO: update-demo-nautilus-8mqlr is created but not running
Feb 17 14:17:20.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3729'
Feb 17 14:17:21.873: INFO: stderr: ""
Feb 17 14:17:21.873: INFO: stdout: "update-demo-nautilus-8mqlr update-demo-nautilus-dfb8j "
Feb 17 14:17:21.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mqlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:17:22.486: INFO: stderr: ""
Feb 17 14:17:22.487: INFO: stdout: ""
Feb 17 14:17:22.487: INFO: update-demo-nautilus-8mqlr is created but not running
Feb 17 14:17:27.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3729'
Feb 17 14:17:27.674: INFO: stderr: ""
Feb 17 14:17:27.674: INFO: stdout: "update-demo-nautilus-8mqlr update-demo-nautilus-dfb8j "
Feb 17 14:17:27.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mqlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:17:27.826: INFO: stderr: ""
Feb 17 14:17:27.826: INFO: stdout: "true"
Feb 17 14:17:27.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mqlr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:17:27.962: INFO: stderr: ""
Feb 17 14:17:27.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 14:17:27.963: INFO: validating pod update-demo-nautilus-8mqlr
Feb 17 14:17:27.969: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 17 14:17:27.969: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 14:17:27.969: INFO: update-demo-nautilus-8mqlr is verified up and running
Feb 17 14:17:27.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dfb8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:17:28.072: INFO: stderr: ""
Feb 17 14:17:28.073: INFO: stdout: "true"
Feb 17 14:17:28.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dfb8j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:17:28.186: INFO: stderr: ""
Feb 17 14:17:28.186: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 14:17:28.186: INFO: validating pod update-demo-nautilus-dfb8j
Feb 17 14:17:28.209: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 17 14:17:28.210: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 14:17:28.210: INFO: update-demo-nautilus-dfb8j is verified up and running
STEP: rolling-update to new replication controller
Feb 17 14:17:28.212: INFO: scanned /root for discovery docs: 
Feb 17 14:17:28.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3729'
Feb 17 14:18:04.002: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 17 14:18:04.003: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 17 14:18:04.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3729'
Feb 17 14:18:04.135: INFO: stderr: ""
Feb 17 14:18:04.135: INFO: stdout: "update-demo-kitten-ctzw9 update-demo-kitten-ng4pm update-demo-nautilus-8mqlr "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 17 14:18:09.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3729'
Feb 17 14:18:09.243: INFO: stderr: ""
Feb 17 14:18:09.244: INFO: stdout: "update-demo-kitten-ctzw9 update-demo-kitten-ng4pm "
Feb 17 14:18:09.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ctzw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:18:09.352: INFO: stderr: ""
Feb 17 14:18:09.352: INFO: stdout: "true"
Feb 17 14:18:09.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ctzw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:18:09.433: INFO: stderr: ""
Feb 17 14:18:09.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 17 14:18:09.433: INFO: validating pod update-demo-kitten-ctzw9
Feb 17 14:18:09.446: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 17 14:18:09.446: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 17 14:18:09.446: INFO: update-demo-kitten-ctzw9 is verified up and running
Feb 17 14:18:09.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ng4pm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:18:09.559: INFO: stderr: ""
Feb 17 14:18:09.559: INFO: stdout: "true"
Feb 17 14:18:09.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ng4pm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3729'
Feb 17 14:18:09.676: INFO: stderr: ""
Feb 17 14:18:09.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 17 14:18:09.676: INFO: validating pod update-demo-kitten-ng4pm
Feb 17 14:18:09.696: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 17 14:18:09.696: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 17 14:18:09.696: INFO: update-demo-kitten-ng4pm is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:18:09.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3729" for this suite.
Feb 17 14:18:33.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:18:33.853: INFO: namespace kubectl-3729 deletion completed in 24.146648292s

• [SLOW TEST:79.132 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
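The rolling update above replaces `update-demo-nautilus` pods with `update-demo-kitten` pods that share the `name=update-demo` label, which is why the template queries poll that selector. A minimal sketch of the replacement controller, with image and labels taken from the log and all other fields assumed:

```yaml
# Hypothetical manifest for the replacement controller in the rolling
# update; only the image and the name=update-demo label come from the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-kitten
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/kitten:1.0
```

After the update succeeds, the log shows the old controller being deleted and the new one renamed back to `update-demo-nautilus`, so the selector never changes during the rollout.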
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:18:33.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f72af2a0-cafc-4667-bdb9-fa1fa9960f1a
STEP: Creating a pod to test consume secrets
Feb 17 14:18:34.121: INFO: Waiting up to 5m0s for pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7" in namespace "secrets-8492" to be "success or failure"
Feb 17 14:18:34.153: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.667531ms
Feb 17 14:18:36.160: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039213246s
Feb 17 14:18:38.166: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044845133s
Feb 17 14:18:40.175: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054323421s
Feb 17 14:18:42.187: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065773455s
Feb 17 14:18:44.194: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072429946s
STEP: Saw pod success
Feb 17 14:18:44.194: INFO: Pod "pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7" satisfied condition "success or failure"
Feb 17 14:18:44.197: INFO: Trying to get logs from node iruya-node pod pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7 container secret-volume-test: 
STEP: delete the pod
Feb 17 14:18:44.243: INFO: Waiting for pod pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7 to disappear
Feb 17 14:18:44.342: INFO: Pod pod-secrets-a43073df-ac41-4e04-a01a-5b17faed14c7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:18:44.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8492" for this suite.
Feb 17 14:18:52.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:18:52.524: INFO: namespace secrets-8492 deletion completed in 8.172077961s
STEP: Destroying namespace "secret-namespace-4256" for this suite.
Feb 17 14:18:58.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:18:58.748: INFO: namespace secret-namespace-4256 deletion completed in 6.224701192s

• [SLOW TEST:24.895 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
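The secret-volume test mounts a secret by name and verifies the mount is scoped to the pod's own namespace, even though `secret-namespace-4256` holds a secret with the same name. A sketch of the consuming pod, assuming the mounttest image and paths the e2e suite conventionally uses (the secret name is from the log; everything else is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    args: ["--file_content=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-f72af2a0-cafc-4667-bdb9-fa1fa9960f1a
```

Because `secretName` is resolved in the pod's namespace, the identically named secret in the other namespace is never visible to this pod.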
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:18:58.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 17 14:19:21.064: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:21.076: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:23.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:23.084: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:25.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:25.084: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:27.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:27.085: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:29.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:29.083: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:31.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:31.083: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:33.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:33.083: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:35.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:35.084: INFO: Pod pod-with-prestop-http-hook still exists
Feb 17 14:19:37.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 17 14:19:37.088: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:19:37.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1649" for this suite.
Feb 17 14:19:59.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:19:59.295: INFO: namespace container-lifecycle-hook-1649 deletion completed in 22.170033923s

• [SLOW TEST:60.546 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
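The preStop hook being exercised above fires an HTTP GET when the pod is deleted, which is why deletion takes several poll cycles: the kubelet runs the hook before killing the container. A minimal sketch of such a pod, assuming the image and handler endpoint (the pod name is from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1   # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumed path; the test's handler pod
          port: 8080                # (created in BeforeEach) records the hit
```

The "check prestop hook" step then queries the handler pod to confirm the request arrived before the pod vanished.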
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:19:59.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 17 14:19:59.471: INFO: Waiting up to 5m0s for pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c" in namespace "var-expansion-6556" to be "success or failure"
Feb 17 14:19:59.477: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109056ms
Feb 17 14:20:01.488: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017582849s
Feb 17 14:20:03.500: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0296601s
Feb 17 14:20:05.508: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037248735s
Feb 17 14:20:07.517: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04674171s
Feb 17 14:20:09.526: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054997165s
STEP: Saw pod success
Feb 17 14:20:09.526: INFO: Pod "var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c" satisfied condition "success or failure"
Feb 17 14:20:09.536: INFO: Trying to get logs from node iruya-node pod var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c container dapi-container: 
STEP: delete the pod
Feb 17 14:20:09.638: INFO: Waiting for pod var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c to disappear
Feb 17 14:20:09.683: INFO: Pod var-expansion-eb6532c2-3a5f-45dc-bd80-98b24fbb3e7c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:20:09.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6556" for this suite.
Feb 17 14:20:15.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:20:15.829: INFO: namespace var-expansion-6556 deletion completed in 6.142041092s

• [SLOW TEST:16.532 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
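Env composition relies on `$(VAR)` expansion: a variable defined later in the `env` list may reference one defined earlier. A sketch of the relevant container fragment (variable names are illustrative, not taken from the log):

```yaml
# Hypothetical env fragment: COMPOSED expands $(FOO) because FOO is
# declared earlier in the same env list.
env:
- name: FOO
  value: foo-value
- name: COMPOSED
  value: "prefix-$(FOO)-suffix"   # resolves to prefix-foo-value-suffix
```

References to undefined variables are left literal rather than expanded, which is what the test distinguishes.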
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:20:15.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 17 14:20:15.949: INFO: Waiting up to 5m0s for pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9" in namespace "downward-api-8777" to be "success or failure"
Feb 17 14:20:15.979: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.601905ms
Feb 17 14:20:17.986: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036061975s
Feb 17 14:20:19.995: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044839289s
Feb 17 14:20:22.002: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052663342s
Feb 17 14:20:24.012: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062642059s
Feb 17 14:20:26.019: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069161903s
STEP: Saw pod success
Feb 17 14:20:26.019: INFO: Pod "downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9" satisfied condition "success or failure"
Feb 17 14:20:26.022: INFO: Trying to get logs from node iruya-node pod downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9 container dapi-container: 
STEP: delete the pod
Feb 17 14:20:26.069: INFO: Waiting for pod downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9 to disappear
Feb 17 14:20:26.179: INFO: Pod downward-api-f652a209-228b-4792-97b7-f4c02d2b57d9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:20:26.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8777" for this suite.
Feb 17 14:20:32.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:20:32.360: INFO: namespace downward-api-8777 deletion completed in 6.172818175s

• [SLOW TEST:16.531 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
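The downward API test exposes the container's own resource limits and requests through `resourceFieldRef` env sources. A sketch of the env fragment, with the container name from the log and variable names assumed:

```yaml
# Hypothetical fragment: limits/requests surfaced as env vars via the
# downward API; CPU_LIMIT and MEMORY_REQUEST are illustrative names.
env:
- name: CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: dapi-container
      resource: limits.cpu
- name: MEMORY_REQUEST
  valueFrom:
    resourceFieldRef:
      containerName: dapi-container
      resource: requests.memory
```

The pod's own logs then print these variables, which is what the "get logs ... container dapi-container" step inspects.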
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:20:32.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:20:42.712: INFO: Waiting up to 5m0s for pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d" in namespace "pods-6093" to be "success or failure"
Feb 17 14:20:42.738: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.615974ms
Feb 17 14:20:44.747: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035457332s
Feb 17 14:20:46.756: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044454184s
Feb 17 14:20:48.783: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070633248s
Feb 17 14:20:50.797: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085012333s
Feb 17 14:20:52.807: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094631049s
STEP: Saw pod success
Feb 17 14:20:52.807: INFO: Pod "client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d" satisfied condition "success or failure"
Feb 17 14:20:52.811: INFO: Trying to get logs from node iruya-node pod client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d container env3cont: 
STEP: delete the pod
Feb 17 14:20:52.927: INFO: Waiting for pod client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d to disappear
Feb 17 14:20:53.060: INFO: Pod client-envvars-334d8ff0-15b2-4869-ac79-641bf84ef42d no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:20:53.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6093" for this suite.
Feb 17 14:21:37.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:21:37.290: INFO: namespace pods-6093 deletion completed in 44.210419192s

• [SLOW TEST:64.929 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
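Service environment variables are injected by the kubelet only into pods created after the service exists, which is why the test first creates a service, then the `client-envvars-*` pod. A sketch of the client pod, assuming the image, command, and service name (`env3cont` is from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox   # assumed image
    command: ["sh", "-c", "env"]
# For a Service named "fooservice" created beforehand, the container's
# env output should include FOOSERVICE_SERVICE_HOST,
# FOOSERVICE_SERVICE_PORT, and the docker-link style FOOSERVICE_PORT_* vars.
```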
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:21:37.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 17 14:21:46.750: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:21:47.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-424" for this suite.
Feb 17 14:22:11.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:22:12.022: INFO: namespace replicaset-424 deletion completed in 24.19212565s

• [SLOW TEST:34.732 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
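Adoption and release both hinge on the controller's label selector: a bare pod matching the selector is adopted (gains an ownerReference), and editing the pod's label out from under the selector releases it, after which the ReplicaSet spins up a replacement. A sketch, assuming apps/v1 and the container image (the `pod-adoption-release` name and `name` label key are from the log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # any bare pod with this label is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx:1.14-alpine   # assumed image
```

Changing the released pod's `name` label back would not re-attach it; adoption only happens for pods with no controller owner.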
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:22:12.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 17 14:22:12.208: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 17 14:22:12.298: INFO: Waiting for terminating namespaces to be deleted...
Feb 17 14:22:12.301: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 17 14:22:12.314: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 17 14:22:12.314: INFO: 	Container weave ready: true, restart count 0
Feb 17 14:22:12.314: INFO: 	Container weave-npc ready: true, restart count 0
Feb 17 14:22:12.314: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.314: INFO: 	Container kube-bench ready: false, restart count 0
Feb 17 14:22:12.314: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.314: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 17 14:22:12.314: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 17 14:22:12.324: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.324: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 17 14:22:12.324: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.324: INFO: 	Container kube-scheduler ready: true, restart count 15
Feb 17 14:22:12.324: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.324: INFO: 	Container coredns ready: true, restart count 0
Feb 17 14:22:12.324: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.324: INFO: 	Container etcd ready: true, restart count 0
Feb 17 14:22:12.324: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 17 14:22:12.324: INFO: 	Container weave ready: true, restart count 0
Feb 17 14:22:12.324: INFO: 	Container weave-npc ready: true, restart count 0
Feb 17 14:22:12.324: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.324: INFO: 	Container coredns ready: true, restart count 0
Feb 17 14:22:12.325: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.325: INFO: 	Container kube-controller-manager ready: true, restart count 23
Feb 17 14:22:12.325: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 17 14:22:12.325: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f436894774198e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:22:13.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-642" for this suite.
Feb 17 14:22:19.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:22:19.537: INFO: namespace sched-pred-642 deletion completed in 6.166810473s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.514 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
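The FailedScheduling event above is produced by giving the pod a `nodeSelector` that no node's labels satisfy. A sketch of such a pod (the `restricted-pod` name is from the log; the label key/value and image are assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    nonexistent-label: "true"   # assumed key; matches neither node
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1   # assumed image
```

With two nodes and zero matches, the scheduler emits exactly the "0/2 nodes are available: 2 node(s) didn't match node selector" event the test waits for.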
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:22:19.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 17 14:22:25.896: INFO: 0 pods remaining
Feb 17 14:22:25.896: INFO: 0 pods has nil DeletionTimestamp
Feb 17 14:22:25.896: INFO: 
STEP: Gathering metrics
W0217 14:22:26.781999       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 14:22:26.782: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:22:26.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6298" for this suite.
Feb 17 14:22:37.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:22:37.182: INFO: namespace gc-6298 deletion completed in 10.393532936s

• [SLOW TEST:17.644 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
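"Keep the rc around until all its pods are deleted" corresponds to foreground cascading deletion: the owner gets a `deletionTimestamp` and a `foregroundDeletion` finalizer, and is only removed once its dependents are gone. A sketch of the delete options body, assuming a raw API call (the RC name and namespace are elided deliberately):

```yaml
# Hypothetical body for
#   DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner persists until dependents are deleted
```

With `Background` the RC would disappear immediately and its pods would be collected afterwards; `Orphan` would leave the pods running with their ownerReferences stripped.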
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:22:37.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 17 14:22:37.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9642'
Feb 17 14:22:37.592: INFO: stderr: ""
Feb 17 14:22:37.592: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 17 14:22:38.609: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:38.609: INFO: Found 0 / 1
Feb 17 14:22:39.623: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:39.623: INFO: Found 0 / 1
Feb 17 14:22:40.706: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:40.706: INFO: Found 0 / 1
Feb 17 14:22:41.612: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:41.612: INFO: Found 0 / 1
Feb 17 14:22:42.604: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:42.604: INFO: Found 0 / 1
Feb 17 14:22:43.619: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:43.619: INFO: Found 0 / 1
Feb 17 14:22:44.600: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:44.600: INFO: Found 0 / 1
Feb 17 14:22:45.602: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:45.602: INFO: Found 0 / 1
Feb 17 14:22:46.605: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:46.605: INFO: Found 0 / 1
Feb 17 14:22:47.660: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:47.660: INFO: Found 1 / 1
Feb 17 14:22:47.660: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 17 14:22:47.666: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:47.666: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 17 14:22:47.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-c6fgb --namespace=kubectl-9642 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 17 14:22:47.836: INFO: stderr: ""
Feb 17 14:22:47.836: INFO: stdout: "pod/redis-master-c6fgb patched\n"
STEP: checking annotations
Feb 17 14:22:47.845: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:22:47.845: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:22:47.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9642" for this suite.
Feb 17 14:23:09.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:23:10.117: INFO: namespace kubectl-9642 deletion completed in 22.212734003s

• [SLOW TEST:32.934 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:23:10.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0217 14:23:50.790260       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 14:23:50.790: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:23:50.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2853" for this suite.
Feb 17 14:24:00.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:24:01.078: INFO: namespace gc-2853 deletion completed in 10.283444561s

• [SLOW TEST:50.961 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:24:01.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 17 14:24:04.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4976,SelfLink:/api/v1/namespaces/watch-4976/configmaps/e2e-watch-test-resource-version,UID:1ee4d209-3444-4497-a223-3c517dff6c7c,ResourceVersion:24708230,Generation:0,CreationTimestamp:2020-02-17 14:24:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 14:24:04.973: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4976,SelfLink:/api/v1/namespaces/watch-4976/configmaps/e2e-watch-test-resource-version,UID:1ee4d209-3444-4497-a223-3c517dff6c7c,ResourceVersion:24708234,Generation:0,CreationTimestamp:2020-02-17 14:24:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:24:04.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4976" for this suite.
Feb 17 14:24:13.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:24:13.589: INFO: namespace watch-4976 deletion completed in 8.604174225s

• [SLOW TEST:12.511 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:24:13.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-acba78ff-ccc6-4815-abb8-73a7e3b2f562
STEP: Creating a pod to test consume configMaps
Feb 17 14:24:13.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096" in namespace "configmap-1360" to be "success or failure"
Feb 17 14:24:13.794: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096": Phase="Pending", Reason="", readiness=false. Elapsed: 17.360095ms
Feb 17 14:24:15.802: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025573159s
Feb 17 14:24:17.818: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041239495s
Feb 17 14:24:19.828: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051014557s
Feb 17 14:24:21.872: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09514066s
Feb 17 14:24:23.887: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110305145s
STEP: Saw pod success
Feb 17 14:24:23.887: INFO: Pod "pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096" satisfied condition "success or failure"
Feb 17 14:24:23.897: INFO: Trying to get logs from node iruya-node pod pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096 container configmap-volume-test: 
STEP: delete the pod
Feb 17 14:24:23.978: INFO: Waiting for pod pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096 to disappear
Feb 17 14:24:23.986: INFO: Pod pod-configmaps-cd7e5906-542d-499b-a80c-2b817ffbf096 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:24:23.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1360" for this suite.
Feb 17 14:24:30.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:24:30.205: INFO: namespace configmap-1360 deletion completed in 6.210341353s

• [SLOW TEST:16.614 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:24:30.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:24:37.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6635" for this suite.
Feb 17 14:24:43.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:24:43.501: INFO: namespace namespaces-6635 deletion completed in 6.176035805s
STEP: Destroying namespace "nsdeletetest-4512" for this suite.
Feb 17 14:24:43.505: INFO: Namespace nsdeletetest-4512 was already deleted
STEP: Destroying namespace "nsdeletetest-5093" for this suite.
Feb 17 14:24:49.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:24:49.694: INFO: namespace nsdeletetest-5093 deletion completed in 6.189938619s

• [SLOW TEST:19.490 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:24:49.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 14:24:49.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9335'
Feb 17 14:24:49.969: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 14:24:49.969: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 17 14:24:50.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9335'
Feb 17 14:24:50.211: INFO: stderr: ""
Feb 17 14:24:50.211: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:24:50.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9335" for this suite.
Feb 17 14:25:12.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:25:12.325: INFO: namespace kubectl-9335 deletion completed in 22.097024046s

• [SLOW TEST:22.630 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:25:12.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 17 14:28:15.728: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:15.757: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:17.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:17.767: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:19.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:19.769: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:21.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:21.766: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:23.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:23.769: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:25.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:25.765: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:27.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:27.768: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:29.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:29.773: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:31.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:31.763: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:33.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:33.766: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:35.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:35.764: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:37.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:37.768: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:39.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:39.767: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:41.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:41.763: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:43.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:43.765: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:45.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:45.765: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 17 14:28:47.757: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 17 14:28:47.764: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:28:47.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3557" for this suite.
Feb 17 14:29:09.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:29:09.969: INFO: namespace container-lifecycle-hook-3557 deletion completed in 22.198245209s

• [SLOW TEST:237.644 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:29:09.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:29:10.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436" in namespace "projected-1799" to be "success or failure"
Feb 17 14:29:10.108: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436": Phase="Pending", Reason="", readiness=false. Elapsed: 41.983423ms
Feb 17 14:29:12.125: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059634565s
Feb 17 14:29:14.135: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069699932s
Feb 17 14:29:16.143: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077268476s
Feb 17 14:29:18.153: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087083308s
Feb 17 14:29:20.162: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096582537s
STEP: Saw pod success
Feb 17 14:29:20.162: INFO: Pod "downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436" satisfied condition "success or failure"
Feb 17 14:29:20.167: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436 container client-container: 
STEP: delete the pod
Feb 17 14:29:20.313: INFO: Waiting for pod downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436 to disappear
Feb 17 14:29:20.325: INFO: Pod downwardapi-volume-a0be09bf-819f-4631-9105-cc2847895436 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:29:20.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1799" for this suite.
Feb 17 14:29:26.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:29:26.722: INFO: namespace projected-1799 deletion completed in 6.382009681s

• [SLOW TEST:16.753 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:29:26.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-4e3c9286-54c9-4702-9966-aafac68159a3
STEP: Creating secret with name s-test-opt-upd-6ac101a7-d0a1-4da7-b6f2-ef58f0dd6018
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4e3c9286-54c9-4702-9966-aafac68159a3
STEP: Updating secret s-test-opt-upd-6ac101a7-d0a1-4da7-b6f2-ef58f0dd6018
STEP: Creating secret with name s-test-opt-create-1469be59-dba1-4346-8348-0ad89e94e216
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:30:48.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3035" for this suite.
Feb 17 14:31:12.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:31:12.978: INFO: namespace projected-3035 deletion completed in 24.16551525s

• [SLOW TEST:106.254 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:31:12.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 17 14:31:13.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 17 14:31:15.546: INFO: stderr: ""
Feb 17 14:31:15.546: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:31:15.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4715" for this suite.
Feb 17 14:31:21.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:31:21.753: INFO: namespace kubectl-4715 deletion completed in 6.19840763s

• [SLOW TEST:8.774 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:31:21.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:31:33.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-123" for this suite.
Feb 17 14:31:40.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:31:40.106: INFO: namespace kubelet-test-123 deletion completed in 6.123380638s

• [SLOW TEST:18.353 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:31:40.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 17 14:31:40.204: INFO: Waiting up to 5m0s for pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6" in namespace "emptydir-6327" to be "success or failure"
Feb 17 14:31:40.208: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.978147ms
Feb 17 14:31:42.224: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019831243s
Feb 17 14:31:44.235: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03096697s
Feb 17 14:31:46.278: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073893002s
Feb 17 14:31:48.699: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495378486s
Feb 17 14:31:50.710: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.506034682s
STEP: Saw pod success
Feb 17 14:31:50.710: INFO: Pod "pod-7d672057-1048-450c-8d44-b420cb5cfcf6" satisfied condition "success or failure"
Feb 17 14:31:50.713: INFO: Trying to get logs from node iruya-node pod pod-7d672057-1048-450c-8d44-b420cb5cfcf6 container test-container: 
STEP: delete the pod
Feb 17 14:31:50.872: INFO: Waiting for pod pod-7d672057-1048-450c-8d44-b420cb5cfcf6 to disappear
Feb 17 14:31:50.883: INFO: Pod pod-7d672057-1048-450c-8d44-b420cb5cfcf6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:31:50.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6327" for this suite.
Feb 17 14:31:56.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:31:57.037: INFO: namespace emptydir-6327 deletion completed in 6.145588409s

• [SLOW TEST:16.930 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:31:57.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 17 14:31:57.138: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 17 14:31:57.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6850'
Feb 17 14:31:57.512: INFO: stderr: ""
Feb 17 14:31:57.512: INFO: stdout: "service/redis-slave created\n"
Feb 17 14:31:57.512: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 17 14:31:57.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6850'
Feb 17 14:31:58.019: INFO: stderr: ""
Feb 17 14:31:58.019: INFO: stdout: "service/redis-master created\n"
Feb 17 14:31:58.020: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 17 14:31:58.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6850'
Feb 17 14:31:58.595: INFO: stderr: ""
Feb 17 14:31:58.595: INFO: stdout: "service/frontend created\n"
Feb 17 14:31:58.596: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 17 14:31:58.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6850'
Feb 17 14:31:59.108: INFO: stderr: ""
Feb 17 14:31:59.108: INFO: stdout: "deployment.apps/frontend created\n"
Feb 17 14:31:59.109: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 17 14:31:59.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6850'
Feb 17 14:31:59.657: INFO: stderr: ""
Feb 17 14:31:59.657: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 17 14:31:59.658: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 17 14:31:59.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6850'
Feb 17 14:32:01.276: INFO: stderr: ""
Feb 17 14:32:01.276: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 17 14:32:01.277: INFO: Waiting for all frontend pods to be Running.
Feb 17 14:32:26.328: INFO: Waiting for frontend to serve content.
Feb 17 14:32:27.942: INFO: Trying to add a new entry to the guestbook.
Feb 17 14:32:27.983: INFO: Verifying that added entry can be retrieved.
Feb 17 14:32:28.015: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb 17 14:32:33.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6850'
Feb 17 14:32:33.407: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:32:33.407: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 14:32:33.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6850'
Feb 17 14:32:33.721: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:32:33.721: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 14:32:33.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6850'
Feb 17 14:32:33.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:32:33.958: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 14:32:33.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6850'
Feb 17 14:32:34.087: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:32:34.087: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 14:32:34.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6850'
Feb 17 14:32:34.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:32:34.254: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 14:32:34.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6850'
Feb 17 14:32:34.510: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:32:34.510: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:32:34.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6850" for this suite.
Feb 17 14:33:20.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:33:20.758: INFO: namespace kubectl-6850 deletion completed in 46.234484968s

• [SLOW TEST:83.721 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:33:20.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 17 14:33:20.885: INFO: namespace kubectl-2290
Feb 17 14:33:20.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2290'
Feb 17 14:33:21.398: INFO: stderr: ""
Feb 17 14:33:21.398: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 17 14:33:22.411: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:22.411: INFO: Found 0 / 1
Feb 17 14:33:23.409: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:23.409: INFO: Found 0 / 1
Feb 17 14:33:24.409: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:24.409: INFO: Found 0 / 1
Feb 17 14:33:25.408: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:25.408: INFO: Found 0 / 1
Feb 17 14:33:26.411: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:26.411: INFO: Found 0 / 1
Feb 17 14:33:27.409: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:27.409: INFO: Found 0 / 1
Feb 17 14:33:28.408: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:28.408: INFO: Found 0 / 1
Feb 17 14:33:29.408: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:29.408: INFO: Found 0 / 1
Feb 17 14:33:30.407: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:30.407: INFO: Found 1 / 1
Feb 17 14:33:30.407: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 17 14:33:30.411: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:33:30.411: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 17 14:33:30.411: INFO: wait on redis-master startup in kubectl-2290 
Feb 17 14:33:30.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pnm5w redis-master --namespace=kubectl-2290'
Feb 17 14:33:30.611: INFO: stderr: ""
Feb 17 14:33:30.611: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Feb 14:33:29.879 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 14:33:29.879 # Server started, Redis version 3.2.12\n1:M 17 Feb 14:33:29.879 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 14:33:29.879 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 17 14:33:30.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2290'
Feb 17 14:33:30.960: INFO: stderr: ""
Feb 17 14:33:30.960: INFO: stdout: "service/rm2 exposed\n"
Feb 17 14:33:30.972: INFO: Service rm2 in namespace kubectl-2290 found.
STEP: exposing service
Feb 17 14:33:33.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2290'
Feb 17 14:33:33.372: INFO: stderr: ""
Feb 17 14:33:33.373: INFO: stdout: "service/rm3 exposed\n"
Feb 17 14:33:33.382: INFO: Service rm3 in namespace kubectl-2290 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:33:35.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2290" for this suite.
Feb 17 14:33:57.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:33:57.607: INFO: namespace kubectl-2290 deletion completed in 22.201140136s

• [SLOW TEST:36.848 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:33:57.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0217 14:34:28.345669       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 14:34:28.345: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:34:28.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6979" for this suite.
Feb 17 14:34:38.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:34:38.526: INFO: namespace gc-6979 deletion completed in 10.175711511s

• [SLOW TEST:40.918 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:34:38.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7584
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 17 14:34:38.902: INFO: Found 0 stateful pods, waiting for 3
Feb 17 14:34:48.915: INFO: Found 2 stateful pods, waiting for 3
Feb 17 14:34:58.913: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:34:58.913: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:34:58.913: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 14:35:08.914: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:35:08.914: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:35:08.914: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 17 14:35:08.960: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 17 14:35:19.170: INFO: Updating stateful set ss2
Feb 17 14:35:19.268: INFO: Waiting for Pod statefulset-7584/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 17 14:35:29.764: INFO: Found 2 stateful pods, waiting for 3
Feb 17 14:35:39.773: INFO: Found 2 stateful pods, waiting for 3
Feb 17 14:35:49.819: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:35:49.819: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:35:49.819: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 14:35:59.825: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:35:59.825: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 14:35:59.825: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 17 14:35:59.897: INFO: Updating stateful set ss2
Feb 17 14:35:59.923: INFO: Waiting for Pod statefulset-7584/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:36:10.971: INFO: Updating stateful set ss2
Feb 17 14:36:10.999: INFO: Waiting for StatefulSet statefulset-7584/ss2 to complete update
Feb 17 14:36:10.999: INFO: Waiting for Pod statefulset-7584/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:36:21.013: INFO: Waiting for StatefulSet statefulset-7584/ss2 to complete update
Feb 17 14:36:21.013: INFO: Waiting for Pod statefulset-7584/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 17 14:36:31.019: INFO: Waiting for StatefulSet statefulset-7584/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 17 14:36:41.019: INFO: Deleting all statefulset in ns statefulset-7584
Feb 17 14:36:41.023: INFO: Scaling statefulset ss2 to 0
Feb 17 14:37:21.082: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 14:37:21.090: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:37:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7584" for this suite.
Feb 17 14:37:29.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:37:29.331: INFO: namespace statefulset-7584 deletion completed in 8.159484618s

• [SLOW TEST:170.805 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:37:29.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:37:39.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3715" for this suite.
Feb 17 14:37:46.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:37:46.090: INFO: namespace emptydir-wrapper-3715 deletion completed in 6.138539325s

• [SLOW TEST:16.758 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:37:46.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2781
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 14:37:46.150: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 14:38:24.361: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2781 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 14:38:24.361: INFO: >>> kubeConfig: /root/.kube/config
I0217 14:38:24.444893       8 log.go:172] (0xc0005764d0) (0xc0009f57c0) Create stream
I0217 14:38:24.445011       8 log.go:172] (0xc0005764d0) (0xc0009f57c0) Stream added, broadcasting: 1
I0217 14:38:24.456529       8 log.go:172] (0xc0005764d0) Reply frame received for 1
I0217 14:38:24.456585       8 log.go:172] (0xc0005764d0) (0xc0026545a0) Create stream
I0217 14:38:24.456599       8 log.go:172] (0xc0005764d0) (0xc0026545a0) Stream added, broadcasting: 3
I0217 14:38:24.458822       8 log.go:172] (0xc0005764d0) Reply frame received for 3
I0217 14:38:24.458868       8 log.go:172] (0xc0005764d0) (0xc000da2140) Create stream
I0217 14:38:24.458885       8 log.go:172] (0xc0005764d0) (0xc000da2140) Stream added, broadcasting: 5
I0217 14:38:24.461231       8 log.go:172] (0xc0005764d0) Reply frame received for 5
I0217 14:38:24.706381       8 log.go:172] (0xc0005764d0) Data frame received for 3
I0217 14:38:24.706435       8 log.go:172] (0xc0026545a0) (3) Data frame handling
I0217 14:38:24.706466       8 log.go:172] (0xc0026545a0) (3) Data frame sent
I0217 14:38:24.841904       8 log.go:172] (0xc0005764d0) (0xc0026545a0) Stream removed, broadcasting: 3
I0217 14:38:24.842045       8 log.go:172] (0xc0005764d0) Data frame received for 1
I0217 14:38:24.842086       8 log.go:172] (0xc0005764d0) (0xc000da2140) Stream removed, broadcasting: 5
I0217 14:38:24.842129       8 log.go:172] (0xc0009f57c0) (1) Data frame handling
I0217 14:38:24.842192       8 log.go:172] (0xc0009f57c0) (1) Data frame sent
I0217 14:38:24.842236       8 log.go:172] (0xc0005764d0) (0xc0009f57c0) Stream removed, broadcasting: 1
I0217 14:38:24.842297       8 log.go:172] (0xc0005764d0) Go away received
I0217 14:38:24.842467       8 log.go:172] (0xc0005764d0) (0xc0009f57c0) Stream removed, broadcasting: 1
I0217 14:38:24.842504       8 log.go:172] (0xc0005764d0) (0xc0026545a0) Stream removed, broadcasting: 3
I0217 14:38:24.842524       8 log.go:172] (0xc0005764d0) (0xc000da2140) Stream removed, broadcasting: 5
Feb 17 14:38:24.842: INFO: Waiting for endpoints: map[]
Feb 17 14:38:24.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2781 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 14:38:24.850: INFO: >>> kubeConfig: /root/.kube/config
I0217 14:38:24.976929       8 log.go:172] (0xc000a94d10) (0xc000da23c0) Create stream
I0217 14:38:24.977211       8 log.go:172] (0xc000a94d10) (0xc000da23c0) Stream added, broadcasting: 1
I0217 14:38:24.996016       8 log.go:172] (0xc000a94d10) Reply frame received for 1
I0217 14:38:24.996177       8 log.go:172] (0xc000a94d10) (0xc002654960) Create stream
I0217 14:38:24.996204       8 log.go:172] (0xc000a94d10) (0xc002654960) Stream added, broadcasting: 3
I0217 14:38:25.000107       8 log.go:172] (0xc000a94d10) Reply frame received for 3
I0217 14:38:25.000296       8 log.go:172] (0xc000a94d10) (0xc001922000) Create stream
I0217 14:38:25.000338       8 log.go:172] (0xc000a94d10) (0xc001922000) Stream added, broadcasting: 5
I0217 14:38:25.007923       8 log.go:172] (0xc000a94d10) Reply frame received for 5
I0217 14:38:25.168455       8 log.go:172] (0xc000a94d10) Data frame received for 3
I0217 14:38:25.168529       8 log.go:172] (0xc002654960) (3) Data frame handling
I0217 14:38:25.168556       8 log.go:172] (0xc002654960) (3) Data frame sent
I0217 14:38:25.283563       8 log.go:172] (0xc000a94d10) Data frame received for 1
I0217 14:38:25.283876       8 log.go:172] (0xc000a94d10) (0xc001922000) Stream removed, broadcasting: 5
I0217 14:38:25.283947       8 log.go:172] (0xc000da23c0) (1) Data frame handling
I0217 14:38:25.283974       8 log.go:172] (0xc000da23c0) (1) Data frame sent
I0217 14:38:25.284078       8 log.go:172] (0xc000a94d10) (0xc000da23c0) Stream removed, broadcasting: 1
I0217 14:38:25.284218       8 log.go:172] (0xc000a94d10) (0xc002654960) Stream removed, broadcasting: 3
I0217 14:38:25.284254       8 log.go:172] (0xc000a94d10) Go away received
I0217 14:38:25.284296       8 log.go:172] (0xc000a94d10) (0xc000da23c0) Stream removed, broadcasting: 1
I0217 14:38:25.284319       8 log.go:172] (0xc000a94d10) (0xc002654960) Stream removed, broadcasting: 3
I0217 14:38:25.284333       8 log.go:172] (0xc000a94d10) (0xc001922000) Stream removed, broadcasting: 5
Feb 17 14:38:25.284: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:38:25.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2781" for this suite.
Feb 17 14:38:49.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:38:49.476: INFO: namespace pod-network-test-2781 deletion completed in 24.18432915s

• [SLOW TEST:63.385 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
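The UDP intra-pod check above curls the netexec `/dial` endpoint on a host-exec pod, which in turn dials the target pod and reports who answered. A cluster-free sketch of the probe URL construction and the response check, using the IPs from the log lines above; the function names and the `netserver-0` hostname are illustrative, not from the e2e framework:

```python
import json
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, protocol="udp", port=8081, tries=1):
    # Rebuild the netexec /dial probe URL seen in the ExecWithOptions lines above.
    query = urlencode({"request": "hostName", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{probe_ip}:8080/dial?{query}"

def endpoints_reached(body):
    # The /dial endpoint answers with JSON like {"responses": ["<hostname>", ...]}.
    return set(json.loads(body).get("responses", []))

assert dial_url("10.44.0.2", "10.44.0.1") == (
    "http://10.44.0.2:8080/dial?request=hostName&protocol=udp"
    "&host=10.44.0.1&port=8081&tries=1"
)
assert endpoints_reached('{"responses": ["netserver-0"]}') == {"netserver-0"}
```

The test passes once every expected endpoint has shown up in the responses, which is why the log ends each attempt with "Waiting for endpoints: map[]" (an empty map of endpoints still outstanding).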
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:38:49.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:38:49.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 17 14:38:49.792: INFO: stderr: ""
Feb 17 14:38:49.792: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:38:49.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6936" for this suite.
Feb 17 14:38:55.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:38:55.972: INFO: namespace kubectl-6936 deletion completed in 6.169216491s

• [SLOW TEST:6.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
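The kubectl version spec above asserts that both the client and server stanzas appear in the command's stdout. A minimal sketch of that check against the output captured in the log (the stdout string is trimmed from the log; the parsing helper is hypothetical, not the upstream assertion):

```python
import re

# Trimmed stdout from the `kubectl version` run logged above.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7"}\n'
    'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1"}\n'
)

def versions_printed(out):
    # Return the GitVersion for each stanza, keyed by "Client"/"Server".
    pattern = r'(Client|Server) Version: version\.Info\{[^}]*GitVersion:"([^"]+)"'
    return dict(re.findall(pattern, out))

assert versions_printed(stdout) == {"Client": "v1.15.7", "Server": "v1.15.1"}
```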
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:38:55.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5545c598-e888-4d18-a0bf-1ab727aa2cf2
STEP: Creating a pod to test consume secrets
Feb 17 14:38:56.114: INFO: Waiting up to 5m0s for pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d" in namespace "secrets-4833" to be "success or failure"
Feb 17 14:38:56.180: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d": Phase="Pending", Reason="", readiness=false. Elapsed: 65.698104ms
Feb 17 14:38:58.186: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072031982s
Feb 17 14:39:00.209: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094768449s
Feb 17 14:39:02.239: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125315812s
Feb 17 14:39:04.262: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148593719s
Feb 17 14:39:06.301: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187314552s
STEP: Saw pod success
Feb 17 14:39:06.301: INFO: Pod "pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d" satisfied condition "success or failure"
Feb 17 14:39:06.350: INFO: Trying to get logs from node iruya-node pod pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d container secret-volume-test: 
STEP: delete the pod
Feb 17 14:39:06.490: INFO: Waiting for pod pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d to disappear
Feb 17 14:39:06.505: INFO: Pod pod-secrets-dee7de71-3355-4ef4-b4bb-e8b5b772ea4d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:39:06.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4833" for this suite.
Feb 17 14:39:12.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:39:12.754: INFO: namespace secrets-4833 deletion completed in 6.240877369s

• [SLOW TEST:16.782 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
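The secret-volume spec above mounts a Secret and has the test container read it back as plain files. Secret values are base64-encoded in the API object but appear decoded inside the mounted volume; a small sketch of that relationship (the key and value names are illustrative):

```python
import base64

# Secret values are base64-encoded in the API object; the kubelet writes the
# decoded bytes into the mounted volume, one file per key.
api_data = {"data-1": base64.b64encode(b"value-1").decode()}
volume_files = {key: base64.b64decode(val) for key, val in api_data.items()}
assert volume_files == {"data-1": b"value-1"}
```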
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:39:12.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-f581e1c1-b698-41c1-8330-6ad3030aee5d
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:39:12.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6861" for this suite.
Feb 17 14:39:18.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:39:19.073: INFO: namespace configmap-6861 deletion completed in 6.204458546s

• [SLOW TEST:6.318 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
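The ConfigMap spec above submits a data map containing an empty key and expects the API server to reject it. Data keys must be non-empty, at most 253 characters, and limited to alphanumerics plus `-`, `_`, and `.`. A client-side sketch of that rule (the helper name is hypothetical; the constraints mirror the server-side key validation):

```python
import re

# Mirrors the key rules the API server enforces for ConfigMap/Secret data:
# non-empty, <= 253 characters, alphanumerics plus '-', '_' and '.'.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def valid_data_key(key):
    return len(key) <= 253 and bool(_KEY_RE.match(key))

assert not valid_data_key("")          # the empty key the test submits is rejected
assert valid_data_key("config.yaml")
assert not valid_data_key("bad key")   # spaces are not allowed
```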
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:39:19.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:39:19.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde" in namespace "projected-21" to be "success or failure"
Feb 17 14:39:19.159: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde": Phase="Pending", Reason="", readiness=false. Elapsed: 9.13811ms
Feb 17 14:39:21.171: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020659834s
Feb 17 14:39:23.185: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035066622s
Feb 17 14:39:25.194: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043983374s
Feb 17 14:39:27.202: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde": Phase="Running", Reason="", readiness=true. Elapsed: 8.051925049s
Feb 17 14:39:29.212: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06138921s
STEP: Saw pod success
Feb 17 14:39:29.212: INFO: Pod "downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde" satisfied condition "success or failure"
Feb 17 14:39:29.217: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde container client-container: 
STEP: delete the pod
Feb 17 14:39:29.414: INFO: Waiting for pod downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde to disappear
Feb 17 14:39:29.439: INFO: Pod downwardapi-volume-fe821367-2816-402e-be94-5b29b4629cde no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:39:29.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-21" for this suite.
Feb 17 14:39:35.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:39:35.615: INFO: namespace projected-21 deletion completed in 6.168440498s

• [SLOW TEST:16.542 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
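The downward API spec above renders selected pod fields as files in a projected volume, with `DefaultMode` controlling their permission bits (0644 unless overridden). A rough sketch of how pod metadata maps to file contents; the pod name, label, and file names here are illustrative only:

```python
# The downward API volume exposes pod fields as files; e.g. metadata.name
# becomes a "podname" file and labels become key="value" lines.
pod = {"metadata": {"name": "downwardapi-volume-test", "labels": {"app": "demo"}}}
files = {
    "podname": pod["metadata"]["name"],
    "labels": "\n".join(f'{k}="{v}"' for k, v in pod["metadata"]["labels"].items()),
}
assert files["podname"] == "downwardapi-volume-test"
assert files["labels"] == 'app="demo"'
```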
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:39:35.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 17 14:39:35.728: INFO: Waiting up to 5m0s for pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554" in namespace "emptydir-3700" to be "success or failure"
Feb 17 14:39:35.754: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554": Phase="Pending", Reason="", readiness=false. Elapsed: 26.76921ms
Feb 17 14:39:37.765: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037702473s
Feb 17 14:39:39.809: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081853581s
Feb 17 14:39:41.824: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096215164s
Feb 17 14:39:43.837: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109154615s
Feb 17 14:39:45.850: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122694169s
STEP: Saw pod success
Feb 17 14:39:45.850: INFO: Pod "pod-b0d8b950-8230-4fbf-8015-c69f3d14b554" satisfied condition "success or failure"
Feb 17 14:39:45.857: INFO: Trying to get logs from node iruya-node pod pod-b0d8b950-8230-4fbf-8015-c69f3d14b554 container test-container: 
STEP: delete the pod
Feb 17 14:39:45.954: INFO: Waiting for pod pod-b0d8b950-8230-4fbf-8015-c69f3d14b554 to disappear
Feb 17 14:39:45.961: INFO: Pod pod-b0d8b950-8230-4fbf-8015-c69f3d14b554 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:39:45.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3700" for this suite.
Feb 17 14:39:51.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:39:52.120: INFO: namespace emptydir-3700 deletion completed in 6.153016486s

• [SLOW TEST:16.505 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
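The emptyDir spec above mounts a memory-backed (tmpfs) volume, writes a file with mode 0666, and checks the permissions from inside the pod. The in-cluster half is not reproducible here, but the mode assertion itself can be sketched locally (this is a stand-in for the pod-side check, not the test's actual implementation):

```python
import os
import stat
import tempfile

# Locally mimic the permission check the pod performs on the tmpfs file:
# force mode 0666 and verify both the numeric mode and its ls-style rendering.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)  # chmod is not subject to the process umask
st = os.stat(path)
mode = stat.S_IMODE(st.st_mode)
assert mode == 0o666
assert stat.filemode(st.st_mode) == "-rw-rw-rw-"
os.unlink(path)
```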
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:39:52.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:39:57.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3723" for this suite.
Feb 17 14:40:03.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:40:03.950: INFO: namespace watch-3723 deletion completed in 6.268628763s

• [SLOW TEST:11.830 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
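The watch spec above starts concurrent watches from different resource versions and asserts they all observe events in the same order. The ordering property can be sketched without a cluster: a watch begun at resource version r replays every later event in the global order, so each watcher's view is a contiguous suffix of that order (the numeric resource versions below are simulated):

```python
# Simulated global event order (resourceVersions) from the writer goroutine.
events = [101, 102, 103, 104, 105]

def watch_from(rv, log=events):
    # A watch started at resourceVersion rv replays every later event, in order.
    return [e for e in log if e > rv]

# Watches started at different points must agree on the ordering they share:
for view in (watch_from(rv) for rv in (100, 102, 104)):
    assert events[len(events) - len(view):] == view  # contiguous suffix
```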
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:40:03.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b
Feb 17 14:40:04.253: INFO: Pod name my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b: Found 0 pods out of 1
Feb 17 14:40:09.259: INFO: Pod name my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b: Found 1 pods out of 1
Feb 17 14:40:09.259: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b" are running
Feb 17 14:40:15.277: INFO: Pod "my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b-6t9tz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 14:40:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 14:40:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 14:40:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 14:40:04 +0000 UTC Reason: Message:}])
Feb 17 14:40:15.277: INFO: Trying to dial the pod
Feb 17 14:40:20.326: INFO: Controller my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b: Got expected result from replica 1 [my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b-6t9tz]: "my-hostname-basic-f6da1935-fd73-4a68-b6f4-f33ffd7e449b-6t9tz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:40:20.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3386" for this suite.
Feb 17 14:40:26.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:40:26.498: INFO: namespace replication-controller-3386 deletion completed in 6.166128934s

• [SLOW TEST:22.547 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:40:26.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 17 14:40:36.652: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 17 14:40:46.801: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:40:46.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2661" for this suite.
Feb 17 14:40:52.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:40:52.995: INFO: namespace pods-2661 deletion completed in 6.186376453s

• [SLOW TEST:26.497 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:40:52.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zhzdv in namespace proxy-185
I0217 14:40:53.326901       8 runners.go:180] Created replication controller with name: proxy-service-zhzdv, namespace: proxy-185, replica count: 1
I0217 14:40:54.377972       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:40:55.378275       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:40:56.378629       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:40:57.379017       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:40:58.379382       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:40:59.379763       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:41:00.380254       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:41:01.380515       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0217 14:41:02.380782       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0217 14:41:03.381205       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0217 14:41:04.381474       8 runners.go:180] proxy-service-zhzdv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 17 14:41:04.388: INFO: setup took 11.324451173s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 17 14:41:04.432: INFO: (0) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 43.541673ms)
Feb 17 14:41:04.432: INFO: (0) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 44.167593ms)
Feb 17 14:41:04.433: INFO: (0) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 44.180979ms)
Feb 17 14:41:04.433: INFO: (0) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 44.372204ms)
Feb 17 14:41:04.433: INFO: (0) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 44.446137ms)
Feb 17 14:41:04.433: INFO: (0) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 44.401972ms)
Feb 17 14:41:04.440: INFO: (0) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 51.443696ms)
Feb 17 14:41:04.440: INFO: (0) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 51.425402ms)
Feb 17 14:41:04.440: INFO: (0) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 51.647745ms)
Feb 17 14:41:04.448: INFO: (0) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: test (200; 15.308945ms)
Feb 17 14:41:04.467: INFO: (1) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 15.520635ms)
Feb 17 14:41:04.467: INFO: (1) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 15.45623ms)
Feb 17 14:41:04.467: INFO: (1) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:462/proxy/: tls qux (200; 15.475794ms)
Feb 17 14:41:04.468: INFO: (1) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 16.140111ms)
Feb 17 14:41:04.468: INFO: (1) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 16.428861ms)
Feb 17 14:41:04.468: INFO: (1) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 16.621846ms)
Feb 17 14:41:04.468: INFO: (1) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 16.76173ms)
Feb 17 14:41:04.468: INFO: (1) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 11.940326ms)
Feb 17 14:41:04.485: INFO: (2) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 15.621947ms)
Feb 17 14:41:04.485: INFO: (2) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 15.508402ms)
Feb 17 14:41:04.485: INFO: (2) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 15.565867ms)
Feb 17 14:41:04.485: INFO: (2) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 15.57907ms)
Feb 17 14:41:04.485: INFO: (2) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testt... (200; 11.987343ms)
Feb 17 14:41:04.500: INFO: (3) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 12.103707ms)
Feb 17 14:41:04.500: INFO: (3) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 12.742506ms)
Feb 17 14:41:04.501: INFO: (3) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 12.812493ms)
Feb 17 14:41:04.501: INFO: (3) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 12.935993ms)
Feb 17 14:41:04.501: INFO: (3) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 13.478498ms)
Feb 17 14:41:04.501: INFO: (3) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 13.606921ms)
Feb 17 14:41:04.502: INFO: (3) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 14.386446ms)
Feb 17 14:41:04.502: INFO: (3) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 14.668262ms)
Feb 17 14:41:04.511: INFO: (3) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 23.827182ms)
Feb 17 14:41:04.525: INFO: (4) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 13.20676ms)
Feb 17 14:41:04.529: INFO: (4) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6/proxy/: test (200; 17.046728ms)
Feb 17 14:41:04.529: INFO: (4) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 16.72022ms)
Feb 17 14:41:04.529: INFO: (4) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 17.646767ms)
Feb 17 14:41:04.529: INFO: (4) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 16.459473ms)
Feb 17 14:41:04.551: INFO: (5) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testt... (200; 16.576116ms)
Feb 17 14:41:04.551: INFO: (5) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 17.358169ms)
Feb 17 14:41:04.552: INFO: (5) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 17.510642ms)
Feb 17 14:41:04.552: INFO: (5) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 17.522174ms)
Feb 17 14:41:04.552: INFO: (5) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 17.651466ms)
Feb 17 14:41:04.553: INFO: (5) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 19.155927ms)
Feb 17 14:41:04.554: INFO: (5) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 19.336253ms)
Feb 17 14:41:04.554: INFO: (5) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 19.444251ms)
Feb 17 14:41:04.554: INFO: (5) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 20.1723ms)
Feb 17 14:41:04.561: INFO: (6) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testt... (200; 7.135668ms)
Feb 17 14:41:04.562: INFO: (6) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 7.037975ms)
Feb 17 14:41:04.562: INFO: (6) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6/proxy/: test (200; 7.760995ms)
Feb 17 14:41:04.562: INFO: (6) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 7.831172ms)
Feb 17 14:41:04.563: INFO: (6) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 8.260002ms)
Feb 17 14:41:04.563: INFO: (6) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 8.469814ms)
Feb 17 14:41:04.563: INFO: (6) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 8.741992ms)
Feb 17 14:41:04.563: INFO: (6) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 8.908183ms)
Feb 17 14:41:04.564: INFO: (6) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: testt... (200; 25.722784ms)
Feb 17 14:41:04.596: INFO: (7) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 25.950047ms)
Feb 17 14:41:04.597: INFO: (7) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 26.98005ms)
Feb 17 14:41:04.597: INFO: (7) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:462/proxy/: tls qux (200; 27.172644ms)
Feb 17 14:41:04.597: INFO: (7) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 27.148426ms)
Feb 17 14:41:04.597: INFO: (7) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: test (200; 27.147282ms)
Feb 17 14:41:04.598: INFO: (7) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 28.722311ms)
Feb 17 14:41:04.600: INFO: (7) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 30.64469ms)
Feb 17 14:41:04.600: INFO: (7) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 30.588937ms)
Feb 17 14:41:04.617: INFO: (8) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6/proxy/: test (200; 16.744624ms)
Feb 17 14:41:04.618: INFO: (8) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 17.07603ms)
Feb 17 14:41:04.618: INFO: (8) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 17.225205ms)
Feb 17 14:41:04.619: INFO: (8) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testt... (200; 18.336867ms)
Feb 17 14:41:04.619: INFO: (8) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 18.18292ms)
Feb 17 14:41:04.619: INFO: (8) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: t... (200; 7.467461ms)
Feb 17 14:41:04.630: INFO: (9) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 13.02983ms)
Feb 17 14:41:04.638: INFO: (9) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 16.36045ms)
Feb 17 14:41:04.638: INFO: (9) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 16.164495ms)
Feb 17 14:41:04.638: INFO: (9) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 16.754041ms)
Feb 17 14:41:04.639: INFO: (9) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 17.419854ms)
Feb 17 14:41:04.639: INFO: (9) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 17.568433ms)
Feb 17 14:41:04.656: INFO: (9) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 34.589111ms)
Feb 17 14:41:04.666: INFO: (10) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 9.205701ms)
Feb 17 14:41:04.666: INFO: (10) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: testt... (200; 9.764772ms)
Feb 17 14:41:04.666: INFO: (10) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 9.854802ms)
Feb 17 14:41:04.667: INFO: (10) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 10.552794ms)
Feb 17 14:41:04.667: INFO: (10) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6/proxy/: test (200; 10.698567ms)
Feb 17 14:41:04.667: INFO: (10) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 11.24276ms)
Feb 17 14:41:04.668: INFO: (10) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 11.859142ms)
Feb 17 14:41:04.668: INFO: (10) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:462/proxy/: tls qux (200; 11.954801ms)
Feb 17 14:41:04.669: INFO: (10) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 12.256101ms)
Feb 17 14:41:04.669: INFO: (10) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 12.15784ms)
Feb 17 14:41:04.669: INFO: (10) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 12.349763ms)
Feb 17 14:41:04.669: INFO: (10) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 12.823485ms)
Feb 17 14:41:04.677: INFO: (11) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6/proxy/: test (200; 7.238163ms)
Feb 17 14:41:04.677: INFO: (11) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 7.224985ms)
Feb 17 14:41:04.677: INFO: (11) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 7.599254ms)
Feb 17 14:41:04.677: INFO: (11) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 7.764354ms)
Feb 17 14:41:04.678: INFO: (11) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 8.645155ms)
Feb 17 14:41:04.678: INFO: (11) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: t... (200; 8.594535ms)
Feb 17 14:41:04.678: INFO: (11) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:462/proxy/: tls qux (200; 8.904815ms)
Feb 17 14:41:04.678: INFO: (11) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 16.580655ms)
Feb 17 14:41:04.699: INFO: (12) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 16.75394ms)
Feb 17 14:41:04.700: INFO: (12) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 17.638419ms)
Feb 17 14:41:04.700: INFO: (12) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 17.612677ms)
Feb 17 14:41:04.700: INFO: (12) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 18.011392ms)
Feb 17 14:41:04.700: INFO: (12) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 17.950171ms)
Feb 17 14:41:04.700: INFO: (12) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 18.63956ms)
Feb 17 14:41:04.700: INFO: (12) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:462/proxy/: tls qux (200; 18.503584ms)
Feb 17 14:41:04.701: INFO: (12) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 18.558493ms)
Feb 17 14:41:04.701: INFO: (12) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 11.130703ms)
Feb 17 14:41:04.712: INFO: (13) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 11.053184ms)
Feb 17 14:41:04.713: INFO: (13) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 11.681766ms)
Feb 17 14:41:04.713: INFO: (13) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 12.482953ms)
Feb 17 14:41:04.714: INFO: (13) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 16.634428ms)
Feb 17 14:41:04.734: INFO: (14) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 16.713606ms)
Feb 17 14:41:04.734: INFO: (14) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 16.702092ms)
Feb 17 14:41:04.734: INFO: (14) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 16.551927ms)
Feb 17 14:41:04.734: INFO: (14) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 17.056103ms)
Feb 17 14:41:04.734: INFO: (14) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 16.821066ms)
Feb 17 14:41:04.734: INFO: (14) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 16.979571ms)
Feb 17 14:41:04.735: INFO: (14) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testt... (200; 19.900627ms)
Feb 17 14:41:04.737: INFO: (14) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 20.056466ms)
Feb 17 14:41:04.738: INFO: (14) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 20.561507ms)
Feb 17 14:41:04.747: INFO: (15) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: t... (200; 9.79136ms)
Feb 17 14:41:04.748: INFO: (15) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 9.368628ms)
Feb 17 14:41:04.748: INFO: (15) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 9.994525ms)
Feb 17 14:41:04.749: INFO: (15) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 9.756292ms)
Feb 17 14:41:04.749: INFO: (15) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 10.882911ms)
Feb 17 14:41:04.756: INFO: (15) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 18.305652ms)
Feb 17 14:41:04.756: INFO: (15) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 18.183852ms)
Feb 17 14:41:04.757: INFO: (15) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 18.182757ms)
Feb 17 14:41:04.757: INFO: (15) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 18.213877ms)
Feb 17 14:41:04.758: INFO: (15) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 19.595225ms)
Feb 17 14:41:04.759: INFO: (15) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname2/proxy/: tls qux (200; 20.6015ms)
Feb 17 14:41:04.779: INFO: (16) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 20.095956ms)
Feb 17 14:41:04.781: INFO: (16) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 21.735631ms)
Feb 17 14:41:04.781: INFO: (16) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 21.858551ms)
Feb 17 14:41:04.781: INFO: (16) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6/proxy/: test (200; 21.962248ms)
Feb 17 14:41:04.781: INFO: (16) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 21.860597ms)
Feb 17 14:41:04.781: INFO: (16) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname2/proxy/: bar (200; 21.885397ms)
Feb 17 14:41:04.781: INFO: (16) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 21.999121ms)
Feb 17 14:41:04.782: INFO: (16) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:462/proxy/: tls qux (200; 22.685999ms)
Feb 17 14:41:04.782: INFO: (16) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtesttest (200; 10.592332ms)
Feb 17 14:41:04.794: INFO: (17) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 10.62537ms)
Feb 17 14:41:04.794: INFO: (17) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:460/proxy/: tls baz (200; 10.930911ms)
Feb 17 14:41:04.795: INFO: (17) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 10.970713ms)
Feb 17 14:41:04.795: INFO: (17) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: test (200; 16.023772ms)
Feb 17 14:41:04.814: INFO: (18) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:1080/proxy/: t... (200; 16.193458ms)
Feb 17 14:41:04.817: INFO: (18) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 18.675801ms)
Feb 17 14:41:04.817: INFO: (18) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 18.770982ms)
Feb 17 14:41:04.817: INFO: (18) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname1/proxy/: foo (200; 18.677158ms)
Feb 17 14:41:04.817: INFO: (18) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 19.004555ms)
Feb 17 14:41:04.817: INFO: (18) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testtest (200; 11.945121ms)
Feb 17 14:41:04.830: INFO: (19) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:162/proxy/: bar (200; 11.887739ms)
Feb 17 14:41:04.832: INFO: (19) /api/v1/namespaces/proxy-185/services/http:proxy-service-zhzdv:portname2/proxy/: bar (200; 14.147339ms)
Feb 17 14:41:04.832: INFO: (19) /api/v1/namespaces/proxy-185/services/proxy-service-zhzdv:portname1/proxy/: foo (200; 14.140647ms)
Feb 17 14:41:04.833: INFO: (19) /api/v1/namespaces/proxy-185/pods/http:proxy-service-zhzdv-wk6w6:160/proxy/: foo (200; 14.59053ms)
Feb 17 14:41:04.833: INFO: (19) /api/v1/namespaces/proxy-185/services/https:proxy-service-zhzdv:tlsportname1/proxy/: tls baz (200; 14.956624ms)
Feb 17 14:41:04.833: INFO: (19) /api/v1/namespaces/proxy-185/pods/proxy-service-zhzdv-wk6w6:1080/proxy/: testt... (200; 15.62776ms)
Feb 17 14:41:04.834: INFO: (19) /api/v1/namespaces/proxy-185/pods/https:proxy-service-zhzdv-wk6w6:443/proxy/: 
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 17 14:41:31.646: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5199 pod-service-account-656ba827-f103-4db2-afaa-87b667683981 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 17 14:41:34.510: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5199 pod-service-account-656ba827-f103-4db2-afaa-87b667683981 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 17 14:41:35.137: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5199 pod-service-account-656ba827-f103-4db2-afaa-87b667683981 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:41:35.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5199" for this suite.
Feb 17 14:41:41.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:41:41.616: INFO: namespace svcaccounts-5199 deletion completed in 6.125030269s

• [SLOW TEST:18.715 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
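The token-mount check above shells out to `kubectl exec` three times to read the projected service-account files. A sketch that rebuilds the same argv the test logs (namespace, pod, and container names are whatever the framework generated for that run; the helper name is our own):

```python
# Rebuilds the kubectl exec command the e2e test runs, e.g.:
#   kubectl exec --namespace=svcaccounts-5199 <pod> -c=test -- \
#       cat /var/run/secrets/kubernetes.io/serviceaccount/token
# Valid filenames on the mounted volume are token, ca.crt, and namespace.
def read_serviceaccount_file(namespace, pod, filename, container="test"):
    """Build the kubectl exec argv used to read a mounted service-account file."""
    return [
        "kubectl", "exec",
        f"--namespace={namespace}", pod, f"-c={container}", "--",
        "cat", f"/var/run/secrets/kubernetes.io/serviceaccount/{filename}",
    ]
```

Actually running the command (for example via `subprocess.run(..., check=True)`) requires a live cluster and the pod from this run, so only the argv construction is shown here.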
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:41:41.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1f6222f3-0654-4afa-8054-b599bf568c0a
STEP: Creating a pod to test consume secrets
Feb 17 14:41:41.714: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60" in namespace "projected-6401" to be "success or failure"
Feb 17 14:41:42.527: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Pending", Reason="", readiness=false. Elapsed: 812.506005ms
Feb 17 14:41:44.554: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840053709s
Feb 17 14:41:46.567: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.853423553s
Feb 17 14:41:48.576: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.861477555s
Feb 17 14:41:50.591: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877079611s
Feb 17 14:41:52.603: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Pending", Reason="", readiness=false. Elapsed: 10.889183603s
Feb 17 14:41:54.617: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.903317512s
STEP: Saw pod success
Feb 17 14:41:54.617: INFO: Pod "pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60" satisfied condition "success or failure"
Feb 17 14:41:54.622: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60 container projected-secret-volume-test: 
STEP: delete the pod
Feb 17 14:41:54.775: INFO: Waiting for pod pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60 to disappear
Feb 17 14:41:54.782: INFO: Pod pod-projected-secrets-3fb28ddc-771e-40e6-8060-c65b4256fc60 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:41:54.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6401" for this suite.
Feb 17 14:42:00.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:42:00.955: INFO: namespace projected-6401 deletion completed in 6.167041373s

• [SLOW TEST:19.339 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
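The "Waiting up to 5m0s for pod ... to be \"success or failure\"" polling above emits one line per poll with the pod phase and elapsed time. A small sketch (format taken from these lines; helper names are our own) for pulling out how long a pod took to reach a given phase:

```python
import re

# Matches polling lines such as:
#   Pod "p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.903317512s
# Early polls may report the elapsed time in ms rather than s, so the
# unit is captured and normalized to seconds.
PHASE_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>\w+)".*'
    r'Elapsed: (?P<elapsed>[0-9.]+)(?P<unit>ms|s)'
)

def time_to_phase(lines, phase="Succeeded"):
    """Seconds until the pod first reports the given phase, or None."""
    for line in lines:
        m = PHASE_RE.search(line)
        if m and m["phase"] == phase:
            value = float(m["elapsed"])
            return value / 1000 if m["unit"] == "ms" else value
    return None
```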
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:42:00.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:42:01.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b" in namespace "downward-api-1271" to be "success or failure"
Feb 17 14:42:01.397: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.566404ms
Feb 17 14:42:03.407: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028307523s
Feb 17 14:42:05.419: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040333255s
Feb 17 14:42:07.427: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048217626s
Feb 17 14:42:09.436: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056485695s
Feb 17 14:42:11.444: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065094096s
STEP: Saw pod success
Feb 17 14:42:11.444: INFO: Pod "downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b" satisfied condition "success or failure"
Feb 17 14:42:11.449: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b container client-container: 
STEP: delete the pod
Feb 17 14:42:11.629: INFO: Waiting for pod downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b to disappear
Feb 17 14:42:11.653: INFO: Pod downwardapi-volume-92f9b142-d1fb-4b26-b8b4-ca554b65a71b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:42:11.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1271" for this suite.
Feb 17 14:42:17.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:42:17.920: INFO: namespace downward-api-1271 deletion completed in 6.25983366s

• [SLOW TEST:16.965 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:42:17.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-e50e56bc-ff77-4c7d-8031-28d1386e954e in namespace container-probe-8653
Feb 17 14:42:26.083: INFO: Started pod test-webserver-e50e56bc-ff77-4c7d-8031-28d1386e954e in namespace container-probe-8653
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 14:42:26.085: INFO: Initial restart count of pod test-webserver-e50e56bc-ff77-4c7d-8031-28d1386e954e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:46:27.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8653" for this suite.
Feb 17 14:46:35.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:46:36.041: INFO: namespace container-probe-8653 deletion completed in 8.220208364s

• [SLOW TEST:258.119 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:46:36.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8152.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8152.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 14:46:50.237: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.252: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.256: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.260: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.265: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.271: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.275: INFO: Unable to read jessie_udp@PodARecord from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.278: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df: the server could not find the requested resource (get pods dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df)
Feb 17 14:46:50.279: INFO: Lookups using dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 17 14:46:55.359: INFO: DNS probes using dns-8152/dns-test-6a767830-8045-4c0b-b9ae-ca1e3c6c90df succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:46:55.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8152" for this suite.
Feb 17 14:47:01.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:47:01.702: INFO: namespace dns-8152 deletion completed in 6.184707927s

• [SLOW TEST:25.660 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
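The long one-line command strings above (identical for the "wheezy" and "jessie" client pods, with `$$` being the framework's shell-escaping of `$`) reduce to a small per-record probe. The sketch below is illustrative, not the test's actual script: `RESULTS_DIR` and the `probe` helper are invented names, and the real loop repeats every record for 600 seconds while hardcoding `/results`.

```shell
# Hedged sketch of one probe from the loop the test injects into its
# client pods. RESULTS_DIR and probe are illustrative names; the real
# loop hardcodes /results and runs for up to 600 iterations.
RESULTS_DIR="${RESULTS_DIR:-/results}"

probe() {
  # $1 = DNS name to resolve, $2 = "udp" or "tcp"
  proto_flag="+notcp"
  [ "$2" = "tcp" ] && proto_flag="+tcp"
  # +noall +answer makes dig print only answer records, so an empty
  # result means the lookup failed and no OK marker file is written.
  answer="$(dig "$proto_flag" +noall +answer +search "$1" A)"
  [ -n "$answer" ] && echo OK > "$RESULTS_DIR/${2}@${1}"
}
```

On success, `probe kubernetes.default.svc.cluster.local udp` writes the marker file `udp@kubernetes.default.svc.cluster.local`, which corresponds to the result names the framework polls for in the "looking for the results from probers" step above.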
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:47:01.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:47:01.846: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 17 14:47:07.261: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 17 14:47:13.288: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 17 14:47:15.297: INFO: Creating deployment "test-rollover-deployment"
Feb 17 14:47:15.388: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 17 14:47:17.399: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 17 14:47:17.410: INFO: Ensure that both replica sets have 1 created replica
Feb 17 14:47:17.419: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 17 14:47:17.431: INFO: Updating deployment test-rollover-deployment
Feb 17 14:47:17.431: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 17 14:47:19.467: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 17 14:47:19.475: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 17 14:47:19.484: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:19.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547637, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:21.501: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:21.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547637, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:23.497: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:23.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547637, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:25.496: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:25.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547637, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:27.928: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:27.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547637, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:29.497: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:29.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547647, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:31.500: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:31.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547647, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:33.497: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:33.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547647, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:35.496: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 14:47:35.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547647, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717547635, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 14:47:37.497: INFO: 
Feb 17 14:47:37.497: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 17 14:47:37.507: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7743,SelfLink:/apis/apps/v1/namespaces/deployment-7743/deployments/test-rollover-deployment,UID:2ce37862-ba6a-4004-b96e-680db6a2631d,ResourceVersion:24711513,Generation:2,CreationTimestamp:2020-02-17 14:47:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-17 14:47:15 +0000 UTC 2020-02-17 14:47:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-17 14:47:37 +0000 UTC 2020-02-17 14:47:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 17 14:47:37.510: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7743,SelfLink:/apis/apps/v1/namespaces/deployment-7743/replicasets/test-rollover-deployment-854595fc44,UID:320a3d65-06e6-4cb4-a069-8c992c91a151,ResourceVersion:24711503,Generation:2,CreationTimestamp:2020-02-17 14:47:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ce37862-ba6a-4004-b96e-680db6a2631d 0xc001dff9a7 0xc001dff9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 17 14:47:37.510: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 17 14:47:37.511: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7743,SelfLink:/apis/apps/v1/namespaces/deployment-7743/replicasets/test-rollover-controller,UID:a071ac80-29d2-42d7-9d64-06f62cd20347,ResourceVersion:24711512,Generation:2,CreationTimestamp:2020-02-17 14:47:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ce37862-ba6a-4004-b96e-680db6a2631d 0xc001dff82f 0xc001dff8a0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 14:47:37.511: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7743,SelfLink:/apis/apps/v1/namespaces/deployment-7743/replicasets/test-rollover-deployment-9b8b997cf,UID:a7fb34f1-9067-4418-863c-891534cb06f1,ResourceVersion:24711465,Generation:2,CreationTimestamp:2020-02-17 14:47:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ce37862-ba6a-4004-b96e-680db6a2631d 0xc001dffa90 0xc001dffa91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 14:47:37.515: INFO: Pod "test-rollover-deployment-854595fc44-dkvzf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-dkvzf,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7743,SelfLink:/api/v1/namespaces/deployment-7743/pods/test-rollover-deployment-854595fc44-dkvzf,UID:bb02411d-345e-4147-9732-7b0e3e0d081d,ResourceVersion:24711487,Generation:0,CreationTimestamp:2020-02-17 14:47:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 320a3d65-06e6-4cb4-a069-8c992c91a151 0xc002ef8977 0xc002ef8978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wqqn4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wqqn4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wqqn4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ef89f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ef8a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:47:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:47:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:47:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 14:47:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-17 14:47:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-17 14:47:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://cecf260b70a14e8aad6ab885b52a7bc98ca8515963ba0cffaade9e219e1d6e32}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:47:37.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7743" for this suite.
Feb 17 14:47:43.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:47:43.637: INFO: namespace deployment-7743 deletion completed in 6.117341825s

• [SLOW TEST:41.935 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
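The repeated "Make sure deployment ... is complete" polling above keeps retrying because the dumped DeploymentStatus shows `UpdatedReplicas:1` against `Replicas:2` while the old pod drains. A deployment counts as complete once every replica is updated and available and the controller has observed the latest generation; this is a hedged sketch of that predicate (the function name is illustrative, not from the e2e framework):

```shell
# Hedged sketch of the completeness condition the test polls on.
# Arguments mirror the DeploymentStatus fields printed in the log:
# $1=spec replicas  $2=updatedReplicas  $3=availableReplicas
# $4=metadata generation  $5=status observedGeneration
deployment_complete() {
  [ "$2" -eq "$1" ] && [ "$3" -eq "$1" ] && [ "$5" -ge "$4" ]
}
```

Checked against the log: the intermediate dumps (`Replicas:2, UpdatedReplicas:1, AvailableReplicas:1`) fail this predicate, while the final status at 14:47:37 (`Replicas:1, UpdatedReplicas:1, AvailableReplicas:1, ObservedGeneration:2`) passes it, which is when the test moves on to "Ensure that both old replica sets have no replicas".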
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:47:43.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 17 14:47:43.955: INFO: Waiting up to 5m0s for pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b" in namespace "emptydir-717" to be "success or failure"
Feb 17 14:47:43.967: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.637342ms
Feb 17 14:47:45.974: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019356068s
Feb 17 14:47:47.989: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033611231s
Feb 17 14:47:50.035: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079680619s
Feb 17 14:47:52.050: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094835498s
Feb 17 14:47:54.072: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116897515s
STEP: Saw pod success
Feb 17 14:47:54.073: INFO: Pod "pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b" satisfied condition "success or failure"
Feb 17 14:47:54.085: INFO: Trying to get logs from node iruya-node pod pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b container test-container: <nil>
STEP: delete the pod
Feb 17 14:47:54.246: INFO: Waiting for pod pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b to disappear
Feb 17 14:47:54.251: INFO: Pod pod-40c56a3c-ec2d-49f0-91e5-2e86e4dc9b3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:47:54.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-717" for this suite.
Feb 17 14:48:02.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:48:02.514: INFO: namespace emptydir-717 deletion completed in 8.258388763s

• [SLOW TEST:18.876 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:48:02.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0217 14:48:15.667617       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 14:48:15.667: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:48:15.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8728" for this suite.
Feb 17 14:48:23.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:48:24.053: INFO: namespace gc-8728 deletion completed in 8.382249279s

• [SLOW TEST:21.539 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:48:24.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 17 14:48:24.255: INFO: Waiting up to 5m0s for pod "pod-3d113661-15bd-468f-95b1-29a16961c911" in namespace "emptydir-4127" to be "success or failure"
Feb 17 14:48:24.392: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 136.771614ms
Feb 17 14:48:26.402: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147098512s
Feb 17 14:48:28.410: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154278866s
Feb 17 14:48:30.518: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263038619s
Feb 17 14:48:32.546: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290897405s
Feb 17 14:48:34.559: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 10.304189711s
Feb 17 14:48:36.577: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 12.321455278s
Feb 17 14:48:38.587: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Pending", Reason="", readiness=false. Elapsed: 14.331257923s
Feb 17 14:48:40.595: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.339419884s
STEP: Saw pod success
Feb 17 14:48:40.595: INFO: Pod "pod-3d113661-15bd-468f-95b1-29a16961c911" satisfied condition "success or failure"
Feb 17 14:48:40.599: INFO: Trying to get logs from node iruya-node pod pod-3d113661-15bd-468f-95b1-29a16961c911 container test-container: <nil>
STEP: delete the pod
Feb 17 14:48:41.011: INFO: Waiting for pod pod-3d113661-15bd-468f-95b1-29a16961c911 to disappear
Feb 17 14:48:41.022: INFO: Pod pod-3d113661-15bd-468f-95b1-29a16961c911 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:48:41.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4127" for this suite.
Feb 17 14:48:47.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:48:47.197: INFO: namespace emptydir-4127 deletion completed in 6.107523543s

• [SLOW TEST:23.142 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:48:47.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6111
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6111 to expose endpoints map[]
Feb 17 14:48:47.379: INFO: Get endpoints failed (16.788894ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 17 14:48:48.388: INFO: successfully validated that service multi-endpoint-test in namespace services-6111 exposes endpoints map[] (1.026149972s elapsed)
STEP: Creating pod pod1 in namespace services-6111
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6111 to expose endpoints map[pod1:[100]]
Feb 17 14:48:52.567: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.144946786s elapsed, will retry)
Feb 17 14:48:56.668: INFO: successfully validated that service multi-endpoint-test in namespace services-6111 exposes endpoints map[pod1:[100]] (8.245599856s elapsed)
STEP: Creating pod pod2 in namespace services-6111
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6111 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 17 14:49:01.324: INFO: Unexpected endpoints: found map[51824d4f-ed25-4dfa-b684-04d0441de96e:[100]], expected map[pod1:[100] pod2:[101]] (4.646887946s elapsed, will retry)
Feb 17 14:49:03.359: INFO: successfully validated that service multi-endpoint-test in namespace services-6111 exposes endpoints map[pod1:[100] pod2:[101]] (6.68192123s elapsed)
STEP: Deleting pod pod1 in namespace services-6111
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6111 to expose endpoints map[pod2:[101]]
Feb 17 14:49:04.433: INFO: successfully validated that service multi-endpoint-test in namespace services-6111 exposes endpoints map[pod2:[101]] (1.064795958s elapsed)
STEP: Deleting pod pod2 in namespace services-6111
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6111 to expose endpoints map[]
Feb 17 14:49:04.593: INFO: successfully validated that service multi-endpoint-test in namespace services-6111 exposes endpoints map[] (68.455622ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:49:04.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6111" for this suite.
Feb 17 14:49:26.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:49:26.871: INFO: namespace services-6111 deletion completed in 22.142315346s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.674 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:49:26.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:50:22.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5343" for this suite.
Feb 17 14:50:28.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:50:28.447: INFO: namespace container-runtime-5343 deletion completed in 6.167166607s

• [SLOW TEST:61.576 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:50:28.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:50:28.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2504'
Feb 17 14:50:29.097: INFO: stderr: ""
Feb 17 14:50:29.097: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 17 14:50:29.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2504'
Feb 17 14:50:29.436: INFO: stderr: ""
Feb 17 14:50:29.436: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 17 14:50:30.453: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:30.453: INFO: Found 0 / 1
Feb 17 14:50:31.770: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:31.770: INFO: Found 0 / 1
Feb 17 14:50:32.443: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:32.443: INFO: Found 0 / 1
Feb 17 14:50:33.443: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:33.444: INFO: Found 0 / 1
Feb 17 14:50:34.445: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:34.445: INFO: Found 0 / 1
Feb 17 14:50:35.455: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:35.455: INFO: Found 0 / 1
Feb 17 14:50:36.446: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:36.447: INFO: Found 0 / 1
Feb 17 14:50:37.445: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:37.445: INFO: Found 0 / 1
Feb 17 14:50:38.447: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:38.447: INFO: Found 1 / 1
Feb 17 14:50:38.447: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 17 14:50:38.452: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:50:38.452: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 17 14:50:38.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-f2gj7 --namespace=kubectl-2504'
Feb 17 14:50:38.656: INFO: stderr: ""
Feb 17 14:50:38.657: INFO: stdout: "Name:           redis-master-f2gj7\nNamespace:      kubectl-2504\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Mon, 17 Feb 2020 14:50:29 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://a8562743d459cd6a1d98990aea46edc8c7a2b6d743f6171a28de026285c79a9d\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 17 Feb 2020 14:50:37 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rdr8s (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-rdr8s:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rdr8s\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-2504/redis-master-f2gj7 to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb 17 14:50:38.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2504'
Feb 17 14:50:38.792: INFO: stderr: ""
Feb 17 14:50:38.792: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2504\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-f2gj7\n"
Feb 17 14:50:38.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2504'
Feb 17 14:50:38.920: INFO: stderr: ""
Feb 17 14:50:38.920: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2504\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.106.2.244\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb 17 14:50:38.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 17 14:50:39.049: INFO: stderr: ""
Feb 17 14:50:39.049: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 17 Feb 2020 14:50:25 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 17 Feb 2020 14:50:25 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 17 Feb 2020 14:50:25 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 17 Feb 2020 14:50:25 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         197d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         128d\n  kubectl-2504               redis-master-f2gj7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb 17 14:50:39.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2504'
Feb 17 14:50:39.125: INFO: stderr: ""
Feb 17 14:50:39.125: INFO: stdout: "Name:         kubectl-2504\nLabels:       e2e-framework=kubectl\n              e2e-run=4fc5e0ed-9db9-4734-b85d-1fa5d7dff60b\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:50:39.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2504" for this suite.
Feb 17 14:51:01.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:51:01.238: INFO: namespace kubectl-2504 deletion completed in 22.109272651s

• [SLOW TEST:32.791 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:51:01.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8b614176-882e-4572-9174-bbcbb7d92539
STEP: Creating a pod to test consume secrets
Feb 17 14:51:01.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0" in namespace "projected-9807" to be "success or failure"
Feb 17 14:51:01.437: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.580546ms
Feb 17 14:51:03.456: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036424287s
Feb 17 14:51:05.475: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055285063s
Feb 17 14:51:07.486: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066098467s
Feb 17 14:51:09.494: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074660253s
Feb 17 14:51:11.502: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Running", Reason="", readiness=true. Elapsed: 10.082731856s
Feb 17 14:51:13.510: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.090786178s
STEP: Saw pod success
Feb 17 14:51:13.511: INFO: Pod "pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0" satisfied condition "success or failure"
Feb 17 14:51:13.515: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0 container projected-secret-volume-test: <nil>
STEP: delete the pod
Feb 17 14:51:13.580: INFO: Waiting for pod pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0 to disappear
Feb 17 14:51:13.588: INFO: Pod pod-projected-secrets-aaa16d38-f04a-4f83-bf29-255e553d68d0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:51:13.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9807" for this suite.
Feb 17 14:51:19.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:51:19.885: INFO: namespace projected-9807 deletion completed in 6.288635905s

• [SLOW TEST:18.646 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:51:19.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 17 14:51:19.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8004 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 17 14:51:30.075: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0217 14:51:28.791206    3203 log.go:172] (0xc0009c4210) (0xc000a3c140) Create stream\nI0217 14:51:28.791362    3203 log.go:172] (0xc0009c4210) (0xc000a3c140) Stream added, broadcasting: 1\nI0217 14:51:28.797253    3203 log.go:172] (0xc0009c4210) Reply frame received for 1\nI0217 14:51:28.797283    3203 log.go:172] (0xc0009c4210) (0xc000a3c280) Create stream\nI0217 14:51:28.797290    3203 log.go:172] (0xc0009c4210) (0xc000a3c280) Stream added, broadcasting: 3\nI0217 14:51:28.798218    3203 log.go:172] (0xc0009c4210) Reply frame received for 3\nI0217 14:51:28.798244    3203 log.go:172] (0xc0009c4210) (0xc000646a00) Create stream\nI0217 14:51:28.798253    3203 log.go:172] (0xc0009c4210) (0xc000646a00) Stream added, broadcasting: 5\nI0217 14:51:28.799452    3203 log.go:172] (0xc0009c4210) Reply frame received for 5\nI0217 14:51:28.799520    3203 log.go:172] (0xc0009c4210) (0xc0003aa000) Create stream\nI0217 14:51:28.799531    3203 log.go:172] (0xc0009c4210) (0xc0003aa000) Stream added, broadcasting: 7\nI0217 14:51:28.800739    3203 log.go:172] (0xc0009c4210) Reply frame received for 7\nI0217 14:51:28.801296    3203 log.go:172] (0xc000a3c280) (3) Writing data frame\nI0217 14:51:28.801424    3203 log.go:172] (0xc000a3c280) (3) Writing data frame\nI0217 14:51:28.806322    3203 log.go:172] (0xc0009c4210) Data frame received for 5\nI0217 14:51:28.806340    3203 log.go:172] (0xc000646a00) (5) Data frame handling\nI0217 14:51:28.806347    3203 log.go:172] (0xc000646a00) (5) Data frame sent\nI0217 14:51:28.810638    3203 log.go:172] (0xc0009c4210) Data frame received for 5\nI0217 14:51:28.810650    3203 log.go:172] (0xc000646a00) (5) Data frame handling\nI0217 14:51:28.810663    3203 log.go:172] (0xc000646a00) (5) Data frame 
sent\nI0217 14:51:30.004368    3203 log.go:172] (0xc0009c4210) Data frame received for 1\nI0217 14:51:30.004426    3203 log.go:172] (0xc0009c4210) (0xc000a3c280) Stream removed, broadcasting: 3\nI0217 14:51:30.004464    3203 log.go:172] (0xc000a3c140) (1) Data frame handling\nI0217 14:51:30.004475    3203 log.go:172] (0xc000a3c140) (1) Data frame sent\nI0217 14:51:30.004513    3203 log.go:172] (0xc0009c4210) (0xc000646a00) Stream removed, broadcasting: 5\nI0217 14:51:30.004565    3203 log.go:172] (0xc0009c4210) (0xc000a3c140) Stream removed, broadcasting: 1\nI0217 14:51:30.004713    3203 log.go:172] (0xc0009c4210) (0xc0003aa000) Stream removed, broadcasting: 7\nI0217 14:51:30.004752    3203 log.go:172] (0xc0009c4210) Go away received\nI0217 14:51:30.005068    3203 log.go:172] (0xc0009c4210) (0xc000a3c140) Stream removed, broadcasting: 1\nI0217 14:51:30.005090    3203 log.go:172] (0xc0009c4210) (0xc000a3c280) Stream removed, broadcasting: 3\nI0217 14:51:30.005106    3203 log.go:172] (0xc0009c4210) (0xc000646a00) Stream removed, broadcasting: 5\nI0217 14:51:30.005119    3203 log.go:172] (0xc0009c4210) (0xc0003aa000) Stream removed, broadcasting: 7\n"
Feb 17 14:51:30.075: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:51:32.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8004" for this suite.
Feb 17 14:51:38.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:51:38.327: INFO: namespace kubectl-8004 deletion completed in 6.227921018s

• [SLOW TEST:18.441 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
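The `Kubectl run --rm job` test above shells out to kubectl with `--rm`, `--attach`, and `--stdin`, so the job is created, attached to, fed stdin, and deleted in one invocation. A sketch of how that argv is composed (note the stderr in the log: `--generator=job/v1` was already deprecated in this release and has since been removed from kubectl):

```python
def kubectl_run_rm_job(namespace, name, image, command):
    """Compose the kubectl argv used by the test (cf. the log line above).

    Illustrative only; the flag set matches the logged command, including
    the deprecated --generator=job/v1.
    """
    return [
        "kubectl", f"--namespace={namespace}",
        "run", name,
        f"--image={image}",
        "--rm=true",            # delete the job once the attached session ends
        "--generator=job/v1",   # deprecated: emits the warning seen in stderr
        "--restart=OnFailure",  # makes `run` create a Job rather than a Pod
        "--attach=true",
        "--stdin",
        "--",
        "sh", "-c", command,
    ]

argv = kubectl_run_rm_job("kubectl-8004", "e2e-test-rm-busybox-job",
                          "docker.io/library/busybox:1.29",
                          "cat && echo 'stdin closed'")
```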
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:51:38.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ba84f6eb-5458-4261-ae56-4e8400d96051
STEP: Creating a pod to test consume configMaps
Feb 17 14:51:38.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3" in namespace "configmap-8661" to be "success or failure"
Feb 17 14:51:38.530: INFO: Pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.094281ms
Feb 17 14:51:40.544: INFO: Pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03012689s
Feb 17 14:51:42.555: INFO: Pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040555408s
Feb 17 14:51:44.572: INFO: Pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058274009s
Feb 17 14:51:46.587: INFO: Pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072977036s
STEP: Saw pod success
Feb 17 14:51:46.587: INFO: Pod "pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3" satisfied condition "success or failure"
Feb 17 14:51:46.592: INFO: Trying to get logs from node iruya-node pod pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3 container configmap-volume-test: 
STEP: delete the pod
Feb 17 14:51:46.650: INFO: Waiting for pod pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3 to disappear
Feb 17 14:51:46.754: INFO: Pod pod-configmaps-73b7e8ea-a4a2-4589-977b-42f8288535a3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:51:46.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8661" for this suite.
Feb 17 14:51:52.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:51:52.892: INFO: namespace configmap-8661 deletion completed in 6.127944202s

• [SLOW TEST:14.565 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
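The ConfigMap test above exercises `defaultMode`, which sets the file permission bits on every key projected into the volume. The test's mode (0644) renders as `rw-r--r--`; a small sketch of that octal-to-`ls`-style conversion using Python's standard library:

```python
import stat

def mode_string(mode: int) -> str:
    """Render a numeric file mode the way `ls -l` does (permission bits only).

    stat.filemode expects a full st_mode; with no file-type bits set the
    leading character is '?', which we drop.
    """
    return stat.filemode(mode)[1:]

# defaultMode: 0644 on a configMap/projected volume yields rw-r--r-- files.
perm = mode_string(0o644)
```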
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:51:52.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:52:25.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7646" for this suite.
Feb 17 14:52:31.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:52:31.454: INFO: namespace namespaces-7646 deletion completed in 6.167389584s
STEP: Destroying namespace "nsdeletetest-4398" for this suite.
Feb 17 14:52:31.459: INFO: Namespace nsdeletetest-4398 was already deleted
STEP: Destroying namespace "nsdeletetest-8438" for this suite.
Feb 17 14:52:37.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:52:37.639: INFO: namespace nsdeletetest-8438 deletion completed in 6.180221704s

• [SLOW TEST:44.747 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
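The Namespaces test above checks two properties: deleting a namespace garbage-collects every pod inside it, and recreating a namespace under the same name yields an empty one. A toy model of that cascading-delete semantics, with a dict standing in for cluster state:

```python
class FakeCluster:
    """Toy model of namespace deletion: removing a namespace removes every
    pod inside it, and recreating the name starts from an empty set."""

    def __init__(self):
        self.pods = {}  # namespace name -> set of pod names

    def create_namespace(self, ns):
        self.pods[ns] = set()

    def create_pod(self, ns, name):
        self.pods[ns].add(name)

    def delete_namespace(self, ns):
        del self.pods[ns]  # cascading delete takes the pods with it

c = FakeCluster()
c.create_namespace("nsdeletetest")
c.create_pod("nsdeletetest", "test-pod")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")   # recreate under the same name
remaining = c.pods["nsdeletetest"]
```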
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:52:37.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 17 14:52:37.736: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:52:51.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-95" for this suite.
Feb 17 14:52:57.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:52:57.460: INFO: namespace init-container-95 deletion completed in 6.285491208s

• [SLOW TEST:19.820 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
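The InitContainer test above relies on the ordering guarantee that init containers run sequentially to completion before the app container starts, and that on a `restartPolicy: Never` pod any non-zero init exit fails the pod. A sketch of that control flow, modeling containers as callables that return exit codes:

```python
def run_pod(init_containers, main_container):
    """Sketch of init-container ordering on a RestartNever pod: each init
    container runs to completion, in order, before the app container starts;
    a non-zero exit code fails the whole pod with no retries."""
    for run in init_containers:
        if run() != 0:
            return "Failed"
    main_container()
    return "Succeeded"

order = []
inits = [lambda: (order.append("init-1"), 0)[1],
         lambda: (order.append("init-2"), 0)[1]]
phase = run_pod(inits, lambda: order.append("main"))
```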
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:52:57.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 17 14:52:57.579: INFO: Waiting up to 5m0s for pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9" in namespace "emptydir-1421" to be "success or failure"
Feb 17 14:52:57.584: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498871ms
Feb 17 14:52:59.594: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014459322s
Feb 17 14:53:01.602: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022340506s
Feb 17 14:53:03.619: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039493808s
Feb 17 14:53:05.631: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051522759s
Feb 17 14:53:07.640: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06081088s
STEP: Saw pod success
Feb 17 14:53:07.640: INFO: Pod "pod-38c9ff25-8321-4372-8cdb-9c716731d0c9" satisfied condition "success or failure"
Feb 17 14:53:07.645: INFO: Trying to get logs from node iruya-node pod pod-38c9ff25-8321-4372-8cdb-9c716731d0c9 container test-container: 
STEP: delete the pod
Feb 17 14:53:07.848: INFO: Waiting for pod pod-38c9ff25-8321-4372-8cdb-9c716731d0c9 to disappear
Feb 17 14:53:07.861: INFO: Pod pod-38c9ff25-8321-4372-8cdb-9c716731d0c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:53:07.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1421" for this suite.
Feb 17 14:53:13.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:53:14.108: INFO: namespace emptydir-1421 deletion completed in 6.231048372s

• [SLOW TEST:16.648 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:53:14.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:53:14.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81" in namespace "downward-api-2380" to be "success or failure"
Feb 17 14:53:14.345: INFO: Pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095527ms
Feb 17 14:53:16.354: INFO: Pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01675902s
Feb 17 14:53:18.366: INFO: Pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029202255s
Feb 17 14:53:20.375: INFO: Pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037573485s
Feb 17 14:53:22.384: INFO: Pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04684667s
STEP: Saw pod success
Feb 17 14:53:22.384: INFO: Pod "downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81" satisfied condition "success or failure"
Feb 17 14:53:22.387: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81 container client-container: 
STEP: delete the pod
Feb 17 14:53:22.422: INFO: Waiting for pod downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81 to disappear
Feb 17 14:53:22.447: INFO: Pod downwardapi-volume-87255a2a-63c2-4bee-b7a8-9f7fa2582c81 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:53:22.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2380" for this suite.
Feb 17 14:53:28.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:53:28.688: INFO: namespace downward-api-2380 deletion completed in 6.228167666s

• [SLOW TEST:14.579 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:53:28.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 17 14:53:28.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8689'
Feb 17 14:53:31.090: INFO: stderr: ""
Feb 17 14:53:31.090: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 17 14:53:32.105: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:32.105: INFO: Found 0 / 1
Feb 17 14:53:33.099: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:33.100: INFO: Found 0 / 1
Feb 17 14:53:34.097: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:34.097: INFO: Found 0 / 1
Feb 17 14:53:35.098: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:35.098: INFO: Found 0 / 1
Feb 17 14:53:36.098: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:36.098: INFO: Found 0 / 1
Feb 17 14:53:37.101: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:37.101: INFO: Found 0 / 1
Feb 17 14:53:38.099: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:38.099: INFO: Found 0 / 1
Feb 17 14:53:39.101: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:39.101: INFO: Found 1 / 1
Feb 17 14:53:39.101: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 17 14:53:39.113: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 14:53:39.113: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 17 14:53:39.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjcbx redis-master --namespace=kubectl-8689'
Feb 17 14:53:39.255: INFO: stderr: ""
Feb 17 14:53:39.255: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Feb 14:53:38.290 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 14:53:38.290 # Server started, Redis version 3.2.12\n1:M 17 Feb 14:53:38.290 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 14:53:38.290 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 17 14:53:39.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjcbx redis-master --namespace=kubectl-8689 --tail=1'
Feb 17 14:53:39.405: INFO: stderr: ""
Feb 17 14:53:39.405: INFO: stdout: "1:M 17 Feb 14:53:38.290 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 17 14:53:39.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjcbx redis-master --namespace=kubectl-8689 --limit-bytes=1'
Feb 17 14:53:39.570: INFO: stderr: ""
Feb 17 14:53:39.570: INFO: stdout: " "
STEP: exposing timestamps
Feb 17 14:53:39.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjcbx redis-master --namespace=kubectl-8689 --tail=1 --timestamps'
Feb 17 14:53:39.669: INFO: stderr: ""
Feb 17 14:53:39.670: INFO: stdout: "2020-02-17T14:53:38.290893615Z 1:M 17 Feb 14:53:38.290 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 17 14:53:42.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjcbx redis-master --namespace=kubectl-8689 --since=1s'
Feb 17 14:53:42.393: INFO: stderr: ""
Feb 17 14:53:42.393: INFO: stdout: ""
Feb 17 14:53:42.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tjcbx redis-master --namespace=kubectl-8689 --since=24h'
Feb 17 14:53:42.517: INFO: stderr: ""
Feb 17 14:53:42.517: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Feb 14:53:38.290 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 14:53:38.290 # Server started, Redis version 3.2.12\n1:M 17 Feb 14:53:38.290 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 14:53:38.290 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 17 14:53:42.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8689'
Feb 17 14:53:42.625: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 14:53:42.625: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 17 14:53:42.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8689'
Feb 17 14:53:42.721: INFO: stderr: "No resources found.\n"
Feb 17 14:53:42.721: INFO: stdout: ""
Feb 17 14:53:42.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8689 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 17 14:53:42.804: INFO: stderr: ""
Feb 17 14:53:42.804: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:53:42.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8689" for this suite.
Feb 17 14:54:04.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:54:05.047: INFO: namespace kubectl-8689 deletion completed in 22.237674939s

• [SLOW TEST:36.359 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
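The Kubectl logs test above exercises the server-side log filters: `--tail=N` returns the last N lines, `--limit-bytes=N` truncates to the first N bytes (hence the single-space stdout), and `--since` restricts by age. The first two are easy to sketch locally:

```python
def tail_lines(log: str, n: int) -> str:
    """Rough equivalent of `kubectl logs --tail=N`: keep the last N lines."""
    return "".join(log.splitlines(keepends=True)[-n:])

def limit_bytes(log: str, n: int) -> str:
    """Rough equivalent of `kubectl logs --limit-bytes=N`: first N bytes."""
    return log.encode()[:n].decode(errors="ignore")

log = "line one\nline two\nready to accept connections\n"
last = tail_lines(log, 1)
first_byte = limit_bytes(log, 1)
```

This also explains the `--limit-bytes=1` result in the log: the first byte of the Redis banner is a space.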
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:54:05.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-978998f3-0fbf-4457-aa1d-884ab4b7fd16
STEP: Creating secret with name secret-projected-all-test-volume-64b86a37-d969-47bb-be92-5723d3a27506
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 17 14:54:05.338: INFO: Waiting up to 5m0s for pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8" in namespace "projected-7610" to be "success or failure"
Feb 17 14:54:05.392: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 54.662793ms
Feb 17 14:54:07.399: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061565509s
Feb 17 14:54:09.411: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073634644s
Feb 17 14:54:11.420: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08245504s
Feb 17 14:54:13.428: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090313639s
Feb 17 14:54:15.434: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096296327s
STEP: Saw pod success
Feb 17 14:54:15.434: INFO: Pod "projected-volume-b065abb3-a018-4208-949a-e24269dd59f8" satisfied condition "success or failure"
Feb 17 14:54:15.438: INFO: Trying to get logs from node iruya-node pod projected-volume-b065abb3-a018-4208-949a-e24269dd59f8 container projected-all-volume-test: 
STEP: delete the pod
Feb 17 14:54:15.582: INFO: Waiting for pod projected-volume-b065abb3-a018-4208-949a-e24269dd59f8 to disappear
Feb 17 14:54:15.593: INFO: Pod projected-volume-b065abb3-a018-4208-949a-e24269dd59f8 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:54:15.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7610" for this suite.
Feb 17 14:54:21.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:54:21.756: INFO: namespace projected-7610 deletion completed in 6.156768087s

• [SLOW TEST:16.708 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:54:21.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:54:21.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26" in namespace "projected-1961" to be "success or failure"
Feb 17 14:54:21.891: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26": Phase="Pending", Reason="", readiness=false. Elapsed: 9.610689ms
Feb 17 14:54:23.911: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029942249s
Feb 17 14:54:25.925: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04381247s
Feb 17 14:54:27.936: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054938317s
Feb 17 14:54:29.943: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062251856s
Feb 17 14:54:31.953: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071525345s
STEP: Saw pod success
Feb 17 14:54:31.953: INFO: Pod "downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26" satisfied condition "success or failure"
Feb 17 14:54:31.956: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26 container client-container: 
STEP: delete the pod
Feb 17 14:54:32.055: INFO: Waiting for pod downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26 to disappear
Feb 17 14:54:32.062: INFO: Pod downwardapi-volume-c75a23fe-c784-4a5c-863b-765eca48bf26 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:54:32.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1961" for this suite.
Feb 17 14:54:38.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:54:38.397: INFO: namespace projected-1961 deletion completed in 6.329090686s

• [SLOW TEST:16.641 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:54:38.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-d3c86da0-2871-434d-857b-5b0ff10bb8d9
STEP: Creating a pod to test consume secrets
Feb 17 14:54:38.552: INFO: Waiting up to 5m0s for pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850" in namespace "secrets-2766" to be "success or failure"
Feb 17 14:54:38.559: INFO: Pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850": Phase="Pending", Reason="", readiness=false. Elapsed: 7.145055ms
Feb 17 14:54:40.573: INFO: Pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021472408s
Feb 17 14:54:42.587: INFO: Pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034694662s
Feb 17 14:54:44.597: INFO: Pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04562172s
Feb 17 14:54:46.648: INFO: Pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095882246s
STEP: Saw pod success
Feb 17 14:54:46.648: INFO: Pod "pod-secrets-add83892-9c05-4129-b973-a5978d58b850" satisfied condition "success or failure"
Feb 17 14:54:46.655: INFO: Trying to get logs from node iruya-node pod pod-secrets-add83892-9c05-4129-b973-a5978d58b850 container secret-volume-test: 
STEP: delete the pod
Feb 17 14:54:46.832: INFO: Waiting for pod pod-secrets-add83892-9c05-4129-b973-a5978d58b850 to disappear
Feb 17 14:54:46.844: INFO: Pod pod-secrets-add83892-9c05-4129-b973-a5978d58b850 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:54:46.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2766" for this suite.
Feb 17 14:54:52.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:54:53.148: INFO: namespace secrets-2766 deletion completed in 6.288807397s

• [SLOW TEST:14.751 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:54:53.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 14:54:53.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92" in namespace "downward-api-7567" to be "success or failure"
Feb 17 14:54:53.290: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92": Phase="Pending", Reason="", readiness=false. Elapsed: 9.508378ms
Feb 17 14:54:55.300: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019141522s
Feb 17 14:54:57.308: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026925221s
Feb 17 14:54:59.317: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036397835s
Feb 17 14:55:01.325: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04467661s
Feb 17 14:55:03.333: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052757534s
STEP: Saw pod success
Feb 17 14:55:03.334: INFO: Pod "downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92" satisfied condition "success or failure"
Feb 17 14:55:03.340: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92 container client-container: 
STEP: delete the pod
Feb 17 14:55:03.414: INFO: Waiting for pod downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92 to disappear
Feb 17 14:55:03.430: INFO: Pod downwardapi-volume-e7d18b4a-3d6e-4892-bc16-eb5c3e953f92 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:55:03.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7567" for this suite.
Feb 17 14:55:09.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:55:09.675: INFO: namespace downward-api-7567 deletion completed in 6.237598585s

• [SLOW TEST:16.524 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:55:09.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:55:09.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9019" for this suite.
Feb 17 14:55:15.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:55:15.969: INFO: namespace services-9019 deletion completed in 6.137865239s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.295 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:55:15.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 17 14:55:16.072: INFO: Waiting up to 5m0s for pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356" in namespace "var-expansion-8584" to be "success or failure"
Feb 17 14:55:16.096: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356": Phase="Pending", Reason="", readiness=false. Elapsed: 24.391236ms
Feb 17 14:55:18.103: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031668673s
Feb 17 14:55:20.149: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076921568s
Feb 17 14:55:22.164: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092189644s
Feb 17 14:55:24.170: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098517509s
Feb 17 14:55:26.177: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105663488s
STEP: Saw pod success
Feb 17 14:55:26.177: INFO: Pod "var-expansion-da930a7a-5977-4901-8f45-30ea34f13356" satisfied condition "success or failure"
Feb 17 14:55:26.181: INFO: Trying to get logs from node iruya-node pod var-expansion-da930a7a-5977-4901-8f45-30ea34f13356 container dapi-container: 
STEP: delete the pod
Feb 17 14:55:26.232: INFO: Waiting for pod var-expansion-da930a7a-5977-4901-8f45-30ea34f13356 to disappear
Feb 17 14:55:26.238: INFO: Pod var-expansion-da930a7a-5977-4901-8f45-30ea34f13356 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:55:26.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8584" for this suite.
Feb 17 14:55:32.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:55:32.358: INFO: namespace var-expansion-8584 deletion completed in 6.112360367s

• [SLOW TEST:16.388 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:55:32.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 17 14:55:32.502: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-755,SelfLink:/api/v1/namespaces/watch-755/configmaps/e2e-watch-test-watch-closed,UID:1f7a527b-53f4-4cad-afd8-4f3fc09e722a,ResourceVersion:24712882,Generation:0,CreationTimestamp:2020-02-17 14:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 14:55:32.502: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-755,SelfLink:/api/v1/namespaces/watch-755/configmaps/e2e-watch-test-watch-closed,UID:1f7a527b-53f4-4cad-afd8-4f3fc09e722a,ResourceVersion:24712883,Generation:0,CreationTimestamp:2020-02-17 14:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 17 14:55:32.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-755,SelfLink:/api/v1/namespaces/watch-755/configmaps/e2e-watch-test-watch-closed,UID:1f7a527b-53f4-4cad-afd8-4f3fc09e722a,ResourceVersion:24712884,Generation:0,CreationTimestamp:2020-02-17 14:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 14:55:32.541: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-755,SelfLink:/api/v1/namespaces/watch-755/configmaps/e2e-watch-test-watch-closed,UID:1f7a527b-53f4-4cad-afd8-4f3fc09e722a,ResourceVersion:24712885,Generation:0,CreationTimestamp:2020-02-17 14:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:55:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-755" for this suite.
Feb 17 14:55:38.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:55:38.742: INFO: namespace watch-755 deletion completed in 6.188539169s

• [SLOW TEST:6.383 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:55:38.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7203.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7203.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 14:55:50.958: INFO: File wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-dcc46995-9f85-4693-a799-3fb2c3dca89d contains '' instead of 'foo.example.com.'
Feb 17 14:55:50.966: INFO: File jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-dcc46995-9f85-4693-a799-3fb2c3dca89d contains '' instead of 'foo.example.com.'
Feb 17 14:55:50.966: INFO: Lookups using dns-7203/dns-test-dcc46995-9f85-4693-a799-3fb2c3dca89d failed for: [wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local]

Feb 17 14:55:55.985: INFO: DNS probes using dns-test-dcc46995-9f85-4693-a799-3fb2c3dca89d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7203.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7203.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 14:56:12.256: INFO: File wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c contains '' instead of 'bar.example.com.'
Feb 17 14:56:12.261: INFO: File jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c contains '' instead of 'bar.example.com.'
Feb 17 14:56:12.261: INFO: Lookups using dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c failed for: [wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local]

Feb 17 14:56:17.277: INFO: File wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 17 14:56:17.284: INFO: File jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 17 14:56:17.284: INFO: Lookups using dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c failed for: [wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local]

Feb 17 14:56:22.270: INFO: File wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 17 14:56:22.275: INFO: File jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 17 14:56:22.275: INFO: Lookups using dns-7203/dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c failed for: [wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local]

Feb 17 14:56:27.282: INFO: DNS probes using dns-test-1095a85d-409f-4f37-b47c-c1ab8fa8529c succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7203.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7203.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 14:56:43.542: INFO: File wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-986ea6c4-3a6f-4bcf-bd97-dda6c4ff3792 contains '' instead of '10.105.54.192'
Feb 17 14:56:43.550: INFO: File jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local from pod  dns-7203/dns-test-986ea6c4-3a6f-4bcf-bd97-dda6c4ff3792 contains '' instead of '10.105.54.192'
Feb 17 14:56:43.550: INFO: Lookups using dns-7203/dns-test-986ea6c4-3a6f-4bcf-bd97-dda6c4ff3792 failed for: [wheezy_udp@dns-test-service-3.dns-7203.svc.cluster.local jessie_udp@dns-test-service-3.dns-7203.svc.cluster.local]

Feb 17 14:56:48.579: INFO: DNS probes using dns-test-986ea6c4-3a6f-4bcf-bd97-dda6c4ff3792 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:56:48.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7203" for this suite.
Feb 17 14:56:54.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:56:55.016: INFO: namespace dns-7203 deletion completed in 6.275212341s

• [SLOW TEST:76.274 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:56:55.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 17 14:56:55.165: INFO: Waiting up to 5m0s for pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb" in namespace "containers-5485" to be "success or failure"
Feb 17 14:56:55.169: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.860227ms
Feb 17 14:56:57.178: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013621102s
Feb 17 14:56:59.194: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029198853s
Feb 17 14:57:01.206: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04093881s
Feb 17 14:57:03.215: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050571324s
Feb 17 14:57:05.220: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055701832s
STEP: Saw pod success
Feb 17 14:57:05.220: INFO: Pod "client-containers-7483150b-38f8-40fa-b654-d76cf653aceb" satisfied condition "success or failure"
Feb 17 14:57:05.224: INFO: Trying to get logs from node iruya-node pod client-containers-7483150b-38f8-40fa-b654-d76cf653aceb container test-container: 
STEP: delete the pod
Feb 17 14:57:05.260: INFO: Waiting for pod client-containers-7483150b-38f8-40fa-b654-d76cf653aceb to disappear
Feb 17 14:57:05.268: INFO: Pod client-containers-7483150b-38f8-40fa-b654-d76cf653aceb no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:57:05.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5485" for this suite.
Feb 17 14:57:11.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:57:11.572: INFO: namespace containers-5485 deletion completed in 6.29942266s

• [SLOW TEST:16.556 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:57:11.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7d2e4524-9c9d-4258-ae41-739465d2e81a
STEP: Creating a pod to test consume configMaps
Feb 17 14:57:11.675: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457" in namespace "projected-525" to be "success or failure"
Feb 17 14:57:11.720: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457": Phase="Pending", Reason="", readiness=false. Elapsed: 44.731566ms
Feb 17 14:57:13.736: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060567722s
Feb 17 14:57:15.745: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069902698s
Feb 17 14:57:17.754: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078141582s
Feb 17 14:57:19.760: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085125937s
Feb 17 14:57:21.772: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096672379s
STEP: Saw pod success
Feb 17 14:57:21.772: INFO: Pod "pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457" satisfied condition "success or failure"
Feb 17 14:57:21.812: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 14:57:21.993: INFO: Waiting for pod pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457 to disappear
Feb 17 14:57:22.002: INFO: Pod pod-projected-configmaps-3ee58a4b-4313-4e3f-8ee7-da698286a457 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:57:22.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-525" for this suite.
Feb 17 14:57:28.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:57:28.180: INFO: namespace projected-525 deletion completed in 6.174816542s

• [SLOW TEST:16.607 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
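[Editor's note] The projected-configMap defaultMode spec above builds roughly the following pod; this is an illustrative sketch only (names, image, and the mode value are hypothetical, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Print the mounted file so the test can read it back from the container logs.
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      # defaultMode applies to every file projected into this volume.
      defaultMode: 0400
      sources:
      - configMap:
          name: projected-configmap-test-volume-example
```

The test then waits for the pod to reach "Succeeded" and checks the container output, which is the "success or failure" polling loop visible in the log.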
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:57:28.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 14:57:28.278: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:57:29.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-945" for this suite.
Feb 17 14:57:35.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:57:35.817: INFO: namespace custom-resource-definition-945 deletion completed in 6.337460504s

• [SLOW TEST:7.636 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:57:35.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5625/configmap-test-4db6da3f-8fc4-4218-9639-ae60b7dc27dc
STEP: Creating a pod to test consume configMaps
Feb 17 14:57:35.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb" in namespace "configmap-5625" to be "success or failure"
Feb 17 14:57:35.985: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.142356ms
Feb 17 14:57:37.996: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0433229s
Feb 17 14:57:40.009: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056897278s
Feb 17 14:57:42.021: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069117106s
Feb 17 14:57:44.031: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079141753s
Feb 17 14:57:46.038: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085681482s
STEP: Saw pod success
Feb 17 14:57:46.038: INFO: Pod "pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb" satisfied condition "success or failure"
Feb 17 14:57:46.041: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb container env-test: 
STEP: delete the pod
Feb 17 14:57:46.083: INFO: Waiting for pod pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb to disappear
Feb 17 14:57:46.100: INFO: Pod pod-configmaps-ae3add2d-e081-4474-a367-5fac50bab7bb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:57:46.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5625" for this suite.
Feb 17 14:57:52.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:57:52.220: INFO: namespace configmap-5625 deletion completed in 6.115507358s

• [SLOW TEST:16.402 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
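[Editor's note] The ConfigMap environment-variable spec above corresponds to a pod of roughly this shape; a sketch with hypothetical names, not the exact manifest the framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # Dump the environment so the injected value appears in the logs.
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1
```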
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:57:52.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 17 14:57:52.343: INFO: Waiting up to 5m0s for pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20" in namespace "downward-api-8867" to be "success or failure"
Feb 17 14:57:52.357: INFO: Pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20": Phase="Pending", Reason="", readiness=false. Elapsed: 14.316287ms
Feb 17 14:57:54.367: INFO: Pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023773114s
Feb 17 14:57:56.376: INFO: Pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032996288s
Feb 17 14:57:58.650: INFO: Pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307101441s
Feb 17 14:58:00.667: INFO: Pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.324394339s
STEP: Saw pod success
Feb 17 14:58:00.667: INFO: Pod "downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20" satisfied condition "success or failure"
Feb 17 14:58:00.672: INFO: Trying to get logs from node iruya-node pod downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20 container dapi-container: 
STEP: delete the pod
Feb 17 14:58:00.824: INFO: Waiting for pod downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20 to disappear
Feb 17 14:58:00.860: INFO: Pod downward-api-9fd499c5-266c-47a3-bd89-cd3df2191d20 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:58:00.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8867" for this suite.
Feb 17 14:58:06.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:58:06.985: INFO: namespace downward-api-8867 deletion completed in 6.109359867s

• [SLOW TEST:14.765 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
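[Editor's note] The Downward API host-IP spec above injects pod status fields as environment variables; a minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          # status.hostIP resolves to the IP of the node running the pod.
          fieldPath: status.hostIP
```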
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:58:06.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f5bee80b-f4ed-4e02-9d8e-39b629e9edd7
STEP: Creating a pod to test consume secrets
Feb 17 14:58:07.347: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba" in namespace "projected-6991" to be "success or failure"
Feb 17 14:58:07.359: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Pending", Reason="", readiness=false. Elapsed: 11.816734ms
Feb 17 14:58:09.370: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023280286s
Feb 17 14:58:11.379: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031925918s
Feb 17 14:58:13.388: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040919058s
Feb 17 14:58:15.399: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052197548s
Feb 17 14:58:17.409: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062334535s
Feb 17 14:58:19.417: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.070001853s
STEP: Saw pod success
Feb 17 14:58:19.417: INFO: Pod "pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba" satisfied condition "success or failure"
Feb 17 14:58:19.420: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba container projected-secret-volume-test: 
STEP: delete the pod
Feb 17 14:58:19.472: INFO: Waiting for pod pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba to disappear
Feb 17 14:58:19.551: INFO: Pod pod-projected-secrets-83c86ce4-157e-42d8-9d71-d53ffba01aba no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:58:19.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6991" for this suite.
Feb 17 14:58:25.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:58:25.806: INFO: namespace projected-6991 deletion completed in 6.245347623s

• [SLOW TEST:18.821 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
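[Editor's note] The projected-secret volume spec above mounts a Secret through a projected volume; a hedged sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example
```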
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:58:25.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 17 14:58:33.965: INFO: Pod pod-hostip-aa7bb28b-fc7d-4c26-b94e-aabfc4314a7d has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:58:33.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6225" for this suite.
Feb 17 14:58:56.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:58:56.098: INFO: namespace pods-6225 deletion completed in 22.129176356s

• [SLOW TEST:30.290 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:58:56.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 17 14:58:56.200: INFO: Waiting up to 5m0s for pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557" in namespace "emptydir-3927" to be "success or failure"
Feb 17 14:58:56.254: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557": Phase="Pending", Reason="", readiness=false. Elapsed: 54.019662ms
Feb 17 14:58:58.262: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061750529s
Feb 17 14:59:00.269: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069013487s
Feb 17 14:59:02.284: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084000284s
Feb 17 14:59:04.299: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098721843s
Feb 17 14:59:06.310: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110281799s
STEP: Saw pod success
Feb 17 14:59:06.311: INFO: Pod "pod-96a1ec11-4216-4dea-ac6c-6f728eac1557" satisfied condition "success or failure"
Feb 17 14:59:06.316: INFO: Trying to get logs from node iruya-node pod pod-96a1ec11-4216-4dea-ac6c-6f728eac1557 container test-container: 
STEP: delete the pod
Feb 17 14:59:06.385: INFO: Waiting for pod pod-96a1ec11-4216-4dea-ac6c-6f728eac1557 to disappear
Feb 17 14:59:06.475: INFO: Pod pod-96a1ec11-4216-4dea-ac6c-6f728eac1557 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 14:59:06.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3927" for this suite.
Feb 17 14:59:12.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 14:59:12.712: INFO: namespace emptydir-3927 deletion completed in 6.224261189s

• [SLOW TEST:16.614 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
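[Editor's note] The emptyDir (root,0777,default) spec above writes a file into an emptyDir on the node's default medium and verifies its permissions; the real test uses the e2e mounttest image, so this busybox version is only an approximation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file, force mode 0777, and print the result for the log check.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: node disk, as opposed to medium: Memory
```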
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 14:59:12.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6137, will wait for the garbage collector to delete the pods
Feb 17 14:59:24.916: INFO: Deleting Job.batch foo took: 19.607344ms
Feb 17 14:59:25.216: INFO: Terminating Job.batch foo pods took: 300.538454ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:00:06.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6137" for this suite.
Feb 17 15:00:12.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:00:12.841: INFO: namespace job-6137 deletion completed in 6.21487337s

• [SLOW TEST:60.127 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
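[Editor's note] The Job deletion spec above creates a parallel Job named "foo", deletes it, and waits for the garbage collector to remove the pods (the "Terminating Job.batch foo pods" line). A sketch of such a Job; the parallelism and command are assumptions, not values from the log:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        # Long-lived command so the pods stay active until the Job is deleted.
        command: ["sleep", "300"]
```

Deleting the Job with foreground or background propagation lets the garbage collector clean up the pods, which is why the log waits on "Ensuring job was deleted" afterwards.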
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:00:12.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6113/secret-test-fe3b2c3a-56d0-44e3-908d-2b302f4d75e6
STEP: Creating a pod to test consume secrets
Feb 17 15:00:13.081: INFO: Waiting up to 5m0s for pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a" in namespace "secrets-6113" to be "success or failure"
Feb 17 15:00:13.117: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.158548ms
Feb 17 15:00:15.126: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044767366s
Feb 17 15:00:17.136: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055076307s
Feb 17 15:00:19.145: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063765151s
Feb 17 15:00:21.159: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077644698s
Feb 17 15:00:23.173: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091602958s
Feb 17 15:00:25.178: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.097404082s
STEP: Saw pod success
Feb 17 15:00:25.178: INFO: Pod "pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a" satisfied condition "success or failure"
Feb 17 15:00:25.181: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a container env-test: 
STEP: delete the pod
Feb 17 15:00:25.226: INFO: Waiting for pod pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a to disappear
Feb 17 15:00:25.234: INFO: Pod pod-configmaps-e79fd8fc-270a-4ce0-996f-072ea8ebf99a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:00:25.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6113" for this suite.
Feb 17 15:00:31.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:00:31.388: INFO: namespace secrets-6113 deletion completed in 6.149164019s

• [SLOW TEST:18.546 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
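[Editor's note] The Secrets environment-variable spec above is the Secret counterpart of the ConfigMap env test; an illustrative sketch (names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example
          key: data-1
```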
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:00:31.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 15:00:31.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a" in namespace "downward-api-7986" to be "success or failure"
Feb 17 15:00:31.514: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.218926ms
Feb 17 15:00:33.522: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017831222s
Feb 17 15:00:35.543: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038215345s
Feb 17 15:00:37.549: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045013732s
Feb 17 15:00:39.555: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05063039s
Feb 17 15:00:41.612: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107830466s
STEP: Saw pod success
Feb 17 15:00:41.612: INFO: Pod "downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a" satisfied condition "success or failure"
Feb 17 15:00:41.617: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a container client-container: 
STEP: delete the pod
Feb 17 15:00:41.689: INFO: Waiting for pod downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a to disappear
Feb 17 15:00:41.694: INFO: Pod downwardapi-volume-bf242a79-9c9c-476b-87ec-773ac5b4992a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:00:41.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7986" for this suite.
Feb 17 15:00:47.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:00:47.979: INFO: namespace downward-api-7986 deletion completed in 6.280065988s

• [SLOW TEST:16.591 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
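[Editor's note] The Downward API volume spec above exposes the container's CPU request as a file via resourceFieldRef; a minimal sketch under assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          # requests.cpu is written to the file in whole cores by default.
          resource: requests.cpu
```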
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:00:47.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 17 15:00:58.125: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c1baead0-0da4-4b24-b258-2d07ed68ab0c,GenerateName:,Namespace:events-3976,SelfLink:/api/v1/namespaces/events-3976/pods/send-events-c1baead0-0da4-4b24-b258-2d07ed68ab0c,UID:8c22eb0c-ca48-49ce-aac8-88a8d41354c1,ResourceVersion:24713731,Generation:0,CreationTimestamp:2020-02-17 15:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 79995197,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pk8w8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pk8w8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pk8w8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009f66c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009f66e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:00:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:00:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:00:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:00:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-17 15:00:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-17 15:00:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://861b2651240cacaf2123dfe20e18f963ddb53b893700ae3b648d2bd9e2cae600}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 17 15:01:00.137: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 17 15:01:02.155: INFO: Saw kubelet event for our pod.
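The two "Saw ... event" checks above amount to filtering the namespace's Event objects by the emitting component and the involved pod. A minimal pure-Python sketch of that filter follows; the field names mirror the Kubernetes Event schema (`involvedObject`, `source`), but the sample events and reasons are fabricated for illustration, not taken from this run.

```python
# Approximation of the e2e test's scheduler/kubelet event checks as a filter
# over Event-shaped dicts. Sample data below is illustrative only.

def events_for_pod(events, pod_name, component):
    """Return events emitted by `component` about the pod `pod_name`."""
    return [
        e for e in events
        if e["involvedObject"]["name"] == pod_name
        and e["source"]["component"] == component
    ]

sample_events = [
    {"involvedObject": {"name": "send-events-x"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-x"},
     "source": {"component": "kubelet"}, "reason": "Pulled"},
]

saw_scheduler = bool(events_for_pod(sample_events, "send-events-x", "default-scheduler"))
saw_kubelet = bool(events_for_pod(sample_events, "send-events-x", "kubelet"))
print(saw_scheduler, saw_kubelet)  # True True for the sample data
```

In the real test the event list comes from the API server (the equivalent of `kubectl get events --field-selector involvedObject.name=<pod>`); the filter logic is the same.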
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:01:02.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3976" for this suite.
Feb 17 15:01:48.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:01:48.437: INFO: namespace events-3976 deletion completed in 46.238485238s

• [SLOW TEST:60.457 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:01:48.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 15:01:49.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325" in namespace "downward-api-1838" to be "success or failure"
Feb 17 15:01:49.219: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325": Phase="Pending", Reason="", readiness=false. Elapsed: 7.80514ms
Feb 17 15:01:51.226: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014815693s
Feb 17 15:01:53.234: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022691862s
Feb 17 15:01:55.248: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036760065s
Feb 17 15:01:57.256: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044736399s
Feb 17 15:01:59.266: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055087817s
STEP: Saw pod success
Feb 17 15:01:59.266: INFO: Pod "downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325" satisfied condition "success or failure"
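The repeated `Phase="Pending" ... Elapsed: ...` lines above (and in every test in this log) come from the framework polling the pod roughly every 2 seconds until it reaches a terminal phase or a 5-minute deadline passes. A minimal sketch of that wait loop, with the clock and sleep injectable so it can run without a live cluster; the function name and signature are illustrative, not the framework's:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check()` until it returns a truthy value or `timeout` elapses.
    Mirrors the framework's "Waiting up to 5m0s for pod ... to be
    'success or failure'" loop with its ~2s polling interval."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)

# Simulated phase sequence standing in for repeated GET-pod calls:
phases = iter(["Pending", "Pending", "Succeeded"])
outcome = wait_for_condition(
    lambda: next(phases) in ("Succeeded", "Failed") or None,
    timeout=10.0, sleep=lambda _: None,
)
print(outcome)  # True once the simulated phase reaches Succeeded
```

The terminal check accepts both `Succeeded` and `Failed` because the condition being waited on is literally "success or failure", as the log text says.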
Feb 17 15:01:59.271: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325 container client-container: 
STEP: delete the pod
Feb 17 15:01:59.605: INFO: Waiting for pod downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325 to disappear
Feb 17 15:01:59.612: INFO: Pod downwardapi-volume-8f6a58bb-f119-427e-a0f0-21102299f325 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:01:59.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1838" for this suite.
Feb 17 15:02:05.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:02:05.889: INFO: namespace downward-api-1838 deletion completed in 6.26711779s

• [SLOW TEST:17.452 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:02:05.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 15:02:06.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e" in namespace "projected-1834" to be "success or failure"
Feb 17 15:02:06.025: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236869ms
Feb 17 15:02:08.040: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018315866s
Feb 17 15:02:10.047: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025808011s
Feb 17 15:02:12.057: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035984585s
Feb 17 15:02:14.069: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047963991s
Feb 17 15:02:16.076: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054514063s
Feb 17 15:02:18.089: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067726654s
Feb 17 15:02:20.105: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.083722516s
STEP: Saw pod success
Feb 17 15:02:20.105: INFO: Pod "downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e" satisfied condition "success or failure"
Feb 17 15:02:20.120: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e container client-container: 
STEP: delete the pod
Feb 17 15:02:20.238: INFO: Waiting for pod downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e to disappear
Feb 17 15:02:20.245: INFO: Pod downwardapi-volume-241189b0-a4e1-401e-97ce-cd7732db688e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:02:20.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1834" for this suite.
Feb 17 15:02:26.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:02:26.386: INFO: namespace projected-1834 deletion completed in 6.134509199s

• [SLOW TEST:20.497 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:02:26.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 17 15:02:26.458: INFO: Waiting up to 5m0s for pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d" in namespace "emptydir-9631" to be "success or failure"
Feb 17 15:02:26.470: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.558179ms
Feb 17 15:02:28.504: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045575609s
Feb 17 15:02:30.521: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062842506s
Feb 17 15:02:32.539: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080797932s
Feb 17 15:02:34.552: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093301385s
Feb 17 15:02:36.569: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110285751s
STEP: Saw pod success
Feb 17 15:02:36.569: INFO: Pod "pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d" satisfied condition "success or failure"
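The "emptydir volume type on node default medium" pod above has a simple shape: an `emptyDir` with no `medium` field is backed by node storage (setting `medium: "Memory"` would make it a tmpfs instead). A hedged sketch of such a manifest as a Python dict; the image and paths are assumptions, not the values used by the test binary:

```python
# Illustrative pod shape for the emptyDir default-medium test. Image, names,
# and mount path are assumed; the conformance test mounts the volume and
# asserts the mount point's permission bits, which are checked inside the
# test binary and are not visible in this log.
emptydir_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-example"},  # hypothetical name
    "spec": {
        "volumes": [{"name": "test-volume", "emptyDir": {}}],  # {} = default medium
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # assumed image
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
    },
}
print(emptydir_pod["spec"]["volumes"][0]["emptyDir"])  # {} means node-backed storage
```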
Feb 17 15:02:36.574: INFO: Trying to get logs from node iruya-node pod pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d container test-container: 
STEP: delete the pod
Feb 17 15:02:36.639: INFO: Waiting for pod pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d to disappear
Feb 17 15:02:36.666: INFO: Pod pod-e2b05ebb-bedb-4a42-9014-1e69db7a481d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:02:36.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9631" for this suite.
Feb 17 15:02:42.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:02:42.837: INFO: namespace emptydir-9631 deletion completed in 6.16221968s

• [SLOW TEST:16.450 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:02:42.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 17 15:02:42.968: INFO: Waiting up to 5m0s for pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e" in namespace "containers-2702" to be "success or failure"
Feb 17 15:02:42.982: INFO: Pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.068399ms
Feb 17 15:02:44.990: INFO: Pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021744922s
Feb 17 15:02:46.998: INFO: Pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030002287s
Feb 17 15:02:49.009: INFO: Pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041071677s
Feb 17 15:02:51.033: INFO: Pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06554537s
STEP: Saw pod success
Feb 17 15:02:51.033: INFO: Pod "client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e" satisfied condition "success or failure"
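The "override the image's default arguments (docker cmd)" test above exercises a fixed Kubernetes rule: a container's `command` replaces the image's ENTRYPOINT and `args` replaces its CMD; the image CMD is only used when neither is set in the pod spec. A small sketch of that resolution logic (the helper name is ours, not the test's):

```python
# Resolution of the effective container invocation from image defaults and
# pod-spec overrides, per the Kubernetes command/args semantics this test
# verifies: `command` overrides ENTRYPOINT, `args` overrides CMD.
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    entry = command if command is not None else image_entrypoint
    if args is not None:
        rest = args                # args always replaces the image CMD
    elif command is not None:
        rest = []                  # command alone discards the image CMD
    else:
        rest = image_cmd           # neither set: image defaults apply
    return entry + rest

# Overriding only args, as the "docker cmd" test does:
print(effective_invocation(["/entrypoint"], ["default-arg"],
                           args=["override-arg"]))
# -> ['/entrypoint', 'override-arg']
```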
Feb 17 15:02:51.038: INFO: Trying to get logs from node iruya-node pod client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e container test-container: 
STEP: delete the pod
Feb 17 15:02:51.125: INFO: Waiting for pod client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e to disappear
Feb 17 15:02:51.128: INFO: Pod client-containers-d71ad08a-42dd-4640-9e2e-5917a2cce02e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:02:51.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2702" for this suite.
Feb 17 15:02:57.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:02:57.307: INFO: namespace containers-2702 deletion completed in 6.130383509s

• [SLOW TEST:14.469 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:02:57.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 17 15:03:15.556: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:15.566: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:17.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:17.575: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:19.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:19.573: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:21.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:21.575: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:23.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:23.576: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:25.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:25.577: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:27.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:27.575: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:29.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:29.632: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:31.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:31.576: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:33.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:33.578: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:35.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:35.580: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:37.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:37.574: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:39.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:39.575: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:41.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:41.581: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:43.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:43.573: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:45.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:45.577: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 17 15:03:47.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 17 15:03:47.596: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
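The ~32 seconds of "still exists" polling above is expected: a `preStop` exec hook runs inside the container before the kubelet sends SIGTERM, so deletion waits on the hook (bounded by `terminationGracePeriodSeconds`). A hedged sketch of what `pod-with-prestop-exec-hook` plausibly looks like; the image and hook command are illustrative assumptions, not the test's actual values:

```python
# Illustrative pod with a preStop exec hook, analogous to
# "pod-with-prestop-exec-hook" in the log. Image and command are assumed.
prestop_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "main",
            "image": "busybox",               # assumed image
            "command": ["sleep", "600"],      # assumed long-running process
            "lifecycle": {
                # Executed in the container before termination begins; the
                # kubelet only proceeds to SIGTERM once the hook finishes,
                # which is why the deletion above takes tens of seconds.
                "preStop": {
                    "exec": {"command": ["sh", "-c", "echo prestop"]}
                }
            },
        }],
        "terminationGracePeriodSeconds": 30,  # upper bound on hook + shutdown
    },
}
hook = prestop_pod["spec"]["containers"][0]["lifecycle"]["preStop"]
print(hook["exec"]["command"])
```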
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:03:47.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4397" for this suite.
Feb 17 15:04:09.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:04:09.837: INFO: namespace container-lifecycle-hook-4397 deletion completed in 22.200999121s

• [SLOW TEST:72.530 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:04:09.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:04:10.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1806" for this suite.
Feb 17 15:04:16.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:04:16.122: INFO: namespace kubelet-test-1806 deletion completed in 6.101587567s

• [SLOW TEST:6.285 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:04:16.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-hq55
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 15:04:16.758: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hq55" in namespace "subpath-6430" to be "success or failure"
Feb 17 15:04:16.864: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Pending", Reason="", readiness=false. Elapsed: 106.475997ms
Feb 17 15:04:18.878: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120685627s
Feb 17 15:04:20.893: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135184889s
Feb 17 15:04:22.921: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163525309s
Feb 17 15:04:24.930: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172444593s
Feb 17 15:04:27.024: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 10.26587226s
Feb 17 15:04:29.033: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 12.27488207s
Feb 17 15:04:31.039: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 14.281641548s
Feb 17 15:04:33.048: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 16.29074662s
Feb 17 15:04:35.060: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 18.302001498s
Feb 17 15:04:37.073: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 20.315455953s
Feb 17 15:04:39.079: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 22.321379224s
Feb 17 15:04:41.087: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 24.32915806s
Feb 17 15:04:43.100: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 26.342020677s
Feb 17 15:04:45.110: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Running", Reason="", readiness=true. Elapsed: 28.35183776s
Feb 17 15:04:47.115: INFO: Pod "pod-subpath-test-downwardapi-hq55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.357281042s
STEP: Saw pod success
Feb 17 15:04:47.115: INFO: Pod "pod-subpath-test-downwardapi-hq55" satisfied condition "success or failure"
Feb 17 15:04:47.118: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-hq55 container test-container-subpath-downwardapi-hq55: 
STEP: delete the pod
Feb 17 15:04:47.171: INFO: Waiting for pod pod-subpath-test-downwardapi-hq55 to disappear
Feb 17 15:04:47.179: INFO: Pod pod-subpath-test-downwardapi-hq55 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-hq55
Feb 17 15:04:47.179: INFO: Deleting pod "pod-subpath-test-downwardapi-hq55" in namespace "subpath-6430"
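The subpath test above mounts a single file out of a downwardAPI volume via `volumeMounts[].subPath`; "atomic writer" refers to how such projected volumes write new content and then swap it in via a symlink flip. A hedged sketch of the pod shape, as a dict; the exact field values are assumptions, since the real spec is not printed in this log:

```python
# Illustrative shape of a subPath mount over a downwardAPI volume, the
# pattern "pod-subpath-test-downwardapi-hq55" exercises. Values assumed.
subpath_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-subpath-test-downwardapi-hq55"},
    "spec": {
        "volumes": [{
            "name": "downward",
            # Atomic-writer volume: files are written to a new directory,
            # then exposed by swapping a symlink, so readers never see a
            # half-written file.
            "downwardAPI": {"items": [{
                "path": "podname",
                "fieldRef": {"fieldPath": "metadata.name"},
            }]},
        }],
        "containers": [{
            "name": "test-container-subpath-downwardapi-hq55",
            "image": "busybox",  # assumed image
            "volumeMounts": [{
                "name": "downward",
                "mountPath": "/test-volume",
                "subPath": "podname",  # mount just this file, not the volume root
            }],
        }],
    },
}
mount = subpath_pod["spec"]["containers"][0]["volumeMounts"][0]
print(mount["subPath"])
```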
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:04:47.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6430" for this suite.
Feb 17 15:04:53.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:04:53.290: INFO: namespace subpath-6430 deletion completed in 6.104746834s

• [SLOW TEST:37.168 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:04:53.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
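The "not orphaning" behavior above works through `ownerReferences`: each pod created by the ReplicationController points back at the RC's UID, and when the RC is deleted without orphaning its dependents, the garbage collector deletes every pod whose owner is gone. A simplified sketch of that dependent-cleanup decision (names and UIDs fabricated; the real GC walks a dependency graph, which this flat filter only approximates):

```python
# Toy model of GC dependent cleanup via ownerReferences. Fabricated data.
rc_uid = "rc-uid-123"

pods = [
    {"name": "rc-pod-a",
     "ownerReferences": [{"kind": "ReplicationController", "uid": rc_uid}]},
    {"name": "rc-pod-b",
     "ownerReferences": [{"kind": "ReplicationController", "uid": rc_uid}]},
    {"name": "standalone-pod", "ownerReferences": []},
]

def surviving_pods(pods, deleted_owner_uids):
    """Keep pods that have no owner, or whose owners all still exist."""
    return [
        p for p in pods
        if not p["ownerReferences"]
        or not any(ref["uid"] in deleted_owner_uids
                   for ref in p["ownerReferences"])
    ]

left = surviving_pods(pods, {rc_uid})
print([p["name"] for p in left])  # only the ownerless pod survives
```

Deleting the RC with an orphaning propagation policy would instead strip the ownerReferences and leave the pods running, which is the contrast this test's title draws.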
STEP: Gathering metrics
W0217 15:05:05.936805       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 15:05:05.936: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:05:05.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6505" for this suite.
Feb 17 15:05:11.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:05:12.102: INFO: namespace gc-6505 deletion completed in 6.159032942s

• [SLOW TEST:18.811 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:05:12.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 15:05:12.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1" in namespace "projected-4096" to be "success or failure"
Feb 17 15:05:12.215: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.704768ms
Feb 17 15:05:14.232: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024721652s
Feb 17 15:05:16.238: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030314535s
Feb 17 15:05:18.248: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040446088s
Feb 17 15:05:20.260: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052513272s
Feb 17 15:05:22.278: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070519405s
STEP: Saw pod success
Feb 17 15:05:22.278: INFO: Pod "downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1" satisfied condition "success or failure"
Feb 17 15:05:22.282: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1 container client-container: 
STEP: delete the pod
Feb 17 15:05:22.326: INFO: Waiting for pod downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1 to disappear
Feb 17 15:05:22.376: INFO: Pod downwardapi-volume-964eafd5-66a4-4b3d-9143-5cf52d6afbc1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:05:22.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4096" for this suite.
Feb 17 15:05:28.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:05:28.606: INFO: namespace projected-4096 deletion completed in 6.18134599s

• [SLOW TEST:16.503 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:05:28.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4455.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4455.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4455.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4455.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4455.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4455.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 15:05:42.785: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8: the server could not find the requested resource (get pods dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8)
Feb 17 15:05:42.789: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8: the server could not find the requested resource (get pods dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8)
Feb 17 15:05:42.794: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-4455.svc.cluster.local from pod dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8: the server could not find the requested resource (get pods dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8)
Feb 17 15:05:42.798: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8: the server could not find the requested resource (get pods dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8)
Feb 17 15:05:42.802: INFO: Unable to read jessie_udp@PodARecord from pod dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8: the server could not find the requested resource (get pods dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8)
Feb 17 15:05:42.806: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8: the server could not find the requested resource (get pods dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8)
Feb 17 15:05:42.806: INFO: Lookups using dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-4455.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 17 15:05:47.881: INFO: DNS probes using dns-4455/dns-test-251e6895-d881-4ca0-9614-6c1cbb5cd3f8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:05:47.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4455" for this suite.
Feb 17 15:05:54.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:05:54.203: INFO: namespace dns-4455 deletion completed in 6.238940665s

• [SLOW TEST:25.597 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:05:54.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-62ff583f-3b32-47da-a1c9-1ae29e073080
STEP: Creating a pod to test consume configMaps
Feb 17 15:05:54.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8" in namespace "configmap-7267" to be "success or failure"
Feb 17 15:05:54.344: INFO: Pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.987735ms
Feb 17 15:05:56.354: INFO: Pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018254354s
Feb 17 15:05:58.362: INFO: Pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026106544s
Feb 17 15:06:00.370: INFO: Pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0335956s
Feb 17 15:06:02.387: INFO: Pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050572085s
STEP: Saw pod success
Feb 17 15:06:02.387: INFO: Pod "pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8" satisfied condition "success or failure"
Feb 17 15:06:02.393: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8 container configmap-volume-test: 
STEP: delete the pod
Feb 17 15:06:02.525: INFO: Waiting for pod pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8 to disappear
Feb 17 15:06:02.537: INFO: Pod pod-configmaps-a59e1c36-97c3-4d5a-9672-b808af5204e8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:06:02.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7267" for this suite.
Feb 17 15:06:08.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:06:08.685: INFO: namespace configmap-7267 deletion completed in 6.143042305s

• [SLOW TEST:14.481 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:06:08.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 17 15:06:08.793: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:06:24.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9479" for this suite.
Feb 17 15:06:30.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:06:30.633: INFO: namespace init-container-9479 deletion completed in 6.164522359s

• [SLOW TEST:21.946 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:06:30.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c in namespace container-probe-4756
Feb 17 15:06:40.819: INFO: Started pod liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c in namespace container-probe-4756
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 15:06:40.825: INFO: Initial restart count of pod liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c is 0
Feb 17 15:07:01.040: INFO: Restart count of pod container-probe-4756/liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c is now 1 (20.214961168s elapsed)
Feb 17 15:07:21.256: INFO: Restart count of pod container-probe-4756/liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c is now 2 (40.430987581s elapsed)
Feb 17 15:07:41.402: INFO: Restart count of pod container-probe-4756/liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c is now 3 (1m0.576763064s elapsed)
Feb 17 15:08:01.485: INFO: Restart count of pod container-probe-4756/liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c is now 4 (1m20.660346594s elapsed)
Feb 17 15:09:03.896: INFO: Restart count of pod container-probe-4756/liveness-48da0a74-6402-41e7-b27f-c20e9c48aa0c is now 5 (2m23.070942341s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:09:03.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4756" for this suite.
Feb 17 15:09:10.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:09:10.104: INFO: namespace container-probe-4756 deletion completed in 6.110403478s

• [SLOW TEST:159.472 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:09:10.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0217 15:09:13.234225       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 15:09:13.234: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:09:13.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8969" for this suite.
Feb 17 15:09:19.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:09:19.608: INFO: namespace gc-8969 deletion completed in 6.366890804s

• [SLOW TEST:9.503 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:09:19.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f49389b7-c443-4ee6-a722-8bea389dbd08
STEP: Creating a pod to test consume configMaps
Feb 17 15:09:19.716: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a" in namespace "projected-6430" to be "success or failure"
Feb 17 15:09:19.726: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.357714ms
Feb 17 15:09:21.734: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017214447s
Feb 17 15:09:23.745: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028288716s
Feb 17 15:09:25.761: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044374241s
Feb 17 15:09:27.777: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05985781s
Feb 17 15:09:29.797: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080447948s
STEP: Saw pod success
Feb 17 15:09:29.797: INFO: Pod "pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a" satisfied condition "success or failure"
Feb 17 15:09:29.809: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 15:09:29.999: INFO: Waiting for pod pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a to disappear
Feb 17 15:09:30.030: INFO: Pod pod-projected-configmaps-ad147186-53aa-4cb5-a04a-2bca744be58a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:09:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6430" for this suite.
Feb 17 15:09:36.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:09:36.210: INFO: namespace projected-6430 deletion completed in 6.173651849s

• [SLOW TEST:16.601 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:09:36.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 15:09:36.397: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 17 15:09:39.725: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:09:40.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9194" for this suite.
Feb 17 15:09:49.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:09:49.995: INFO: namespace replication-controller-9194 deletion completed in 9.184195649s

• [SLOW TEST:13.785 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:09:49.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 15:09:50.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:09:58.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3421" for this suite.
Feb 17 15:10:44.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:10:44.387: INFO: namespace pods-3421 deletion completed in 46.172993653s

• [SLOW TEST:54.392 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:10:44.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 17 15:10:44.506: INFO: Waiting up to 5m0s for pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1" in namespace "emptydir-8699" to be "success or failure"
Feb 17 15:10:44.513: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.061732ms
Feb 17 15:10:46.520: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014098634s
Feb 17 15:10:49.513: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.007013358s
Feb 17 15:10:51.521: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.015478277s
Feb 17 15:10:53.531: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.025493454s
Feb 17 15:10:55.541: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.035861959s
STEP: Saw pod success
Feb 17 15:10:55.542: INFO: Pod "pod-8a25c846-7a51-40dc-9965-4faaa6310bb1" satisfied condition "success or failure"
Feb 17 15:10:55.547: INFO: Trying to get logs from node iruya-node pod pod-8a25c846-7a51-40dc-9965-4faaa6310bb1 container test-container: 
STEP: delete the pod
Feb 17 15:10:55.619: INFO: Waiting for pod pod-8a25c846-7a51-40dc-9965-4faaa6310bb1 to disappear
Feb 17 15:10:55.626: INFO: Pod pod-8a25c846-7a51-40dc-9965-4faaa6310bb1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:10:55.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8699" for this suite.
Feb 17 15:11:01.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:11:01.829: INFO: namespace emptydir-8699 deletion completed in 6.196717358s

• [SLOW TEST:17.441 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:11:01.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-6c5e54d9-18d5-425b-9baf-5b392c0bdbcb in namespace container-probe-4056
Feb 17 15:11:12.063: INFO: Started pod liveness-6c5e54d9-18d5-425b-9baf-5b392c0bdbcb in namespace container-probe-4056
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 15:11:12.071: INFO: Initial restart count of pod liveness-6c5e54d9-18d5-425b-9baf-5b392c0bdbcb is 0
Feb 17 15:11:34.314: INFO: Restart count of pod container-probe-4056/liveness-6c5e54d9-18d5-425b-9baf-5b392c0bdbcb is now 1 (22.242514287s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:11:34.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4056" for this suite.
Feb 17 15:11:40.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:11:40.811: INFO: namespace container-probe-4056 deletion completed in 6.21145891s

• [SLOW TEST:38.982 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
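The probe spec above creates a pod whose container serves `/healthz` and then starts failing the probe, so the kubelet restarts the container (restartCount goes from 0 to 1 after ~22s in the log). A hedged sketch of a pod manifest with an equivalent HTTP liveness probe — the pod name, image, and probe timings here are illustrative assumptions, not the exact manifest generated by the e2e framework:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # illustrative; the test generates a UUID-suffixed name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness # illustrative image that serves /healthz, then fails it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz         # the endpoint the kubelet polls
        port: 8080
      initialDelaySeconds: 3   # give the server time to come up before probing
      periodSeconds: 3         # probe every 3s; repeated failures trigger a restart
```

Once the probe fails `failureThreshold` consecutive times, the kubelet kills and restarts the container — that restart-count increment is exactly what the spec above verifies.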
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:11:40.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 15:11:40.990: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"960c5800-e126-4773-b1b2-7a20868d4f82", Controller:(*bool)(0xc001f4bb3a), BlockOwnerDeletion:(*bool)(0xc001f4bb3b)}}
Feb 17 15:11:41.056: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"55d5d6b3-64ae-4f66-ba58-0bdc539decdc", Controller:(*bool)(0xc0020d57c2), BlockOwnerDeletion:(*bool)(0xc0020d57c3)}}
Feb 17 15:11:41.110: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ecf27663-c55c-4303-af09-553bda098763", Controller:(*bool)(0xc000476d72), BlockOwnerDeletion:(*bool)(0xc000476d73)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:11:46.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8095" for this suite.
Feb 17 15:11:54.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:11:54.339: INFO: namespace gc-8095 deletion completed in 8.154192174s

• [SLOW TEST:13.528 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
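The garbage-collector spec above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the `OwnerReferences` dumps in the log) and verifies that garbage collection is not deadlocked by the circle. A hedged sketch of the metadata shape involved — only pod1 is shown; the UID is copied from the log and the rest is an illustrative reconstruction:

```yaml
# pod1's metadata, naming pod3 as its controller owner
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 960c5800-e126-4773-b1b2-7a20868d4f82  # from the log line above
    controller: true
    blockOwnerDeletion: true
# pod2 references pod1 and pod3 references pod2 the same way, closing the circle
```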
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:11:54.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 17 15:12:14.567: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:14.567: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:14.667396       8 log.go:172] (0xc000479970) (0xc0017f46e0) Create stream
I0217 15:12:14.667469       8 log.go:172] (0xc000479970) (0xc0017f46e0) Stream added, broadcasting: 1
I0217 15:12:14.673942       8 log.go:172] (0xc000479970) Reply frame received for 1
I0217 15:12:14.673974       8 log.go:172] (0xc000479970) (0xc0035f8000) Create stream
I0217 15:12:14.673986       8 log.go:172] (0xc000479970) (0xc0035f8000) Stream added, broadcasting: 3
I0217 15:12:14.675468       8 log.go:172] (0xc000479970) Reply frame received for 3
I0217 15:12:14.675493       8 log.go:172] (0xc000479970) (0xc0035f80a0) Create stream
I0217 15:12:14.675503       8 log.go:172] (0xc000479970) (0xc0035f80a0) Stream added, broadcasting: 5
I0217 15:12:14.677438       8 log.go:172] (0xc000479970) Reply frame received for 5
I0217 15:12:14.815809       8 log.go:172] (0xc000479970) Data frame received for 3
I0217 15:12:14.815861       8 log.go:172] (0xc0035f8000) (3) Data frame handling
I0217 15:12:14.815888       8 log.go:172] (0xc0035f8000) (3) Data frame sent
I0217 15:12:14.992836       8 log.go:172] (0xc000479970) (0xc0035f8000) Stream removed, broadcasting: 3
I0217 15:12:14.993030       8 log.go:172] (0xc000479970) Data frame received for 1
I0217 15:12:14.993046       8 log.go:172] (0xc0017f46e0) (1) Data frame handling
I0217 15:12:14.993068       8 log.go:172] (0xc0017f46e0) (1) Data frame sent
I0217 15:12:14.993217       8 log.go:172] (0xc000479970) (0xc0017f46e0) Stream removed, broadcasting: 1
I0217 15:12:14.993328       8 log.go:172] (0xc000479970) (0xc0035f80a0) Stream removed, broadcasting: 5
I0217 15:12:14.993389       8 log.go:172] (0xc000479970) (0xc0017f46e0) Stream removed, broadcasting: 1
I0217 15:12:14.993409       8 log.go:172] (0xc000479970) (0xc0035f8000) Stream removed, broadcasting: 3
I0217 15:12:14.993424       8 log.go:172] (0xc000479970) (0xc0035f80a0) Stream removed, broadcasting: 5
Feb 17 15:12:14.993: INFO: Exec stderr: ""
I0217 15:12:14.993444       8 log.go:172] (0xc000479970) Go away received
Feb 17 15:12:14.993: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:14.993: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:15.058790       8 log.go:172] (0xc000576e70) (0xc0017f4b40) Create stream
I0217 15:12:15.059158       8 log.go:172] (0xc000576e70) (0xc0017f4b40) Stream added, broadcasting: 1
I0217 15:12:15.067413       8 log.go:172] (0xc000576e70) Reply frame received for 1
I0217 15:12:15.067481       8 log.go:172] (0xc000576e70) (0xc0022380a0) Create stream
I0217 15:12:15.067493       8 log.go:172] (0xc000576e70) (0xc0022380a0) Stream added, broadcasting: 3
I0217 15:12:15.068951       8 log.go:172] (0xc000576e70) Reply frame received for 3
I0217 15:12:15.068978       8 log.go:172] (0xc000576e70) (0xc0003a0500) Create stream
I0217 15:12:15.068987       8 log.go:172] (0xc000576e70) (0xc0003a0500) Stream added, broadcasting: 5
I0217 15:12:15.070463       8 log.go:172] (0xc000576e70) Reply frame received for 5
I0217 15:12:15.179854       8 log.go:172] (0xc000576e70) Data frame received for 3
I0217 15:12:15.179954       8 log.go:172] (0xc0022380a0) (3) Data frame handling
I0217 15:12:15.179974       8 log.go:172] (0xc0022380a0) (3) Data frame sent
I0217 15:12:15.315381       8 log.go:172] (0xc000576e70) Data frame received for 1
I0217 15:12:15.315524       8 log.go:172] (0xc000576e70) (0xc0022380a0) Stream removed, broadcasting: 3
I0217 15:12:15.315608       8 log.go:172] (0xc0017f4b40) (1) Data frame handling
I0217 15:12:15.315715       8 log.go:172] (0xc000576e70) (0xc0003a0500) Stream removed, broadcasting: 5
I0217 15:12:15.315779       8 log.go:172] (0xc0017f4b40) (1) Data frame sent
I0217 15:12:15.315791       8 log.go:172] (0xc000576e70) (0xc0017f4b40) Stream removed, broadcasting: 1
I0217 15:12:15.315800       8 log.go:172] (0xc000576e70) Go away received
I0217 15:12:15.315919       8 log.go:172] (0xc000576e70) (0xc0017f4b40) Stream removed, broadcasting: 1
I0217 15:12:15.315951       8 log.go:172] (0xc000576e70) (0xc0022380a0) Stream removed, broadcasting: 3
I0217 15:12:15.315960       8 log.go:172] (0xc000576e70) (0xc0003a0500) Stream removed, broadcasting: 5
Feb 17 15:12:15.315: INFO: Exec stderr: ""
Feb 17 15:12:15.316: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:15.316: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:15.404906       8 log.go:172] (0xc0013ac4d0) (0xc0023363c0) Create stream
I0217 15:12:15.405021       8 log.go:172] (0xc0013ac4d0) (0xc0023363c0) Stream added, broadcasting: 1
I0217 15:12:15.418343       8 log.go:172] (0xc0013ac4d0) Reply frame received for 1
I0217 15:12:15.418480       8 log.go:172] (0xc0013ac4d0) (0xc0003a0640) Create stream
I0217 15:12:15.418498       8 log.go:172] (0xc0013ac4d0) (0xc0003a0640) Stream added, broadcasting: 3
I0217 15:12:15.420363       8 log.go:172] (0xc0013ac4d0) Reply frame received for 3
I0217 15:12:15.420392       8 log.go:172] (0xc0013ac4d0) (0xc0022381e0) Create stream
I0217 15:12:15.420408       8 log.go:172] (0xc0013ac4d0) (0xc0022381e0) Stream added, broadcasting: 5
I0217 15:12:15.421566       8 log.go:172] (0xc0013ac4d0) Reply frame received for 5
I0217 15:12:15.520707       8 log.go:172] (0xc0013ac4d0) Data frame received for 3
I0217 15:12:15.520776       8 log.go:172] (0xc0003a0640) (3) Data frame handling
I0217 15:12:15.520799       8 log.go:172] (0xc0003a0640) (3) Data frame sent
I0217 15:12:15.650136       8 log.go:172] (0xc0013ac4d0) Data frame received for 1
I0217 15:12:15.650214       8 log.go:172] (0xc0013ac4d0) (0xc0003a0640) Stream removed, broadcasting: 3
I0217 15:12:15.650320       8 log.go:172] (0xc0023363c0) (1) Data frame handling
I0217 15:12:15.650340       8 log.go:172] (0xc0023363c0) (1) Data frame sent
I0217 15:12:15.650361       8 log.go:172] (0xc0013ac4d0) (0xc0023363c0) Stream removed, broadcasting: 1
I0217 15:12:15.650467       8 log.go:172] (0xc0013ac4d0) (0xc0022381e0) Stream removed, broadcasting: 5
I0217 15:12:15.650516       8 log.go:172] (0xc0013ac4d0) (0xc0023363c0) Stream removed, broadcasting: 1
I0217 15:12:15.650536       8 log.go:172] (0xc0013ac4d0) (0xc0003a0640) Stream removed, broadcasting: 3
I0217 15:12:15.650578       8 log.go:172] (0xc0013ac4d0) (0xc0022381e0) Stream removed, broadcasting: 5
Feb 17 15:12:15.650: INFO: Exec stderr: ""
Feb 17 15:12:15.650: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:15.650: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:15.651226       8 log.go:172] (0xc0013ac4d0) Go away received
I0217 15:12:15.736134       8 log.go:172] (0xc001394fd0) (0xc0003a1040) Create stream
I0217 15:12:15.736187       8 log.go:172] (0xc001394fd0) (0xc0003a1040) Stream added, broadcasting: 1
I0217 15:12:15.743188       8 log.go:172] (0xc001394fd0) Reply frame received for 1
I0217 15:12:15.743216       8 log.go:172] (0xc001394fd0) (0xc002336460) Create stream
I0217 15:12:15.743229       8 log.go:172] (0xc001394fd0) (0xc002336460) Stream added, broadcasting: 3
I0217 15:12:15.744386       8 log.go:172] (0xc001394fd0) Reply frame received for 3
I0217 15:12:15.744409       8 log.go:172] (0xc001394fd0) (0xc0003a10e0) Create stream
I0217 15:12:15.744418       8 log.go:172] (0xc001394fd0) (0xc0003a10e0) Stream added, broadcasting: 5
I0217 15:12:15.750161       8 log.go:172] (0xc001394fd0) Reply frame received for 5
I0217 15:12:15.856117       8 log.go:172] (0xc001394fd0) Data frame received for 3
I0217 15:12:15.856162       8 log.go:172] (0xc002336460) (3) Data frame handling
I0217 15:12:15.856182       8 log.go:172] (0xc002336460) (3) Data frame sent
I0217 15:12:15.973995       8 log.go:172] (0xc001394fd0) Data frame received for 1
I0217 15:12:15.974107       8 log.go:172] (0xc0003a1040) (1) Data frame handling
I0217 15:12:15.974142       8 log.go:172] (0xc0003a1040) (1) Data frame sent
I0217 15:12:15.974169       8 log.go:172] (0xc001394fd0) (0xc0003a1040) Stream removed, broadcasting: 1
I0217 15:12:15.974268       8 log.go:172] (0xc001394fd0) (0xc002336460) Stream removed, broadcasting: 3
I0217 15:12:15.974336       8 log.go:172] (0xc001394fd0) (0xc0003a10e0) Stream removed, broadcasting: 5
I0217 15:12:15.974407       8 log.go:172] (0xc001394fd0) Go away received
I0217 15:12:15.974433       8 log.go:172] (0xc001394fd0) (0xc0003a1040) Stream removed, broadcasting: 1
I0217 15:12:15.974473       8 log.go:172] (0xc001394fd0) (0xc002336460) Stream removed, broadcasting: 3
I0217 15:12:15.974626       8 log.go:172] (0xc001394fd0) (0xc0003a10e0) Stream removed, broadcasting: 5
Feb 17 15:12:15.974: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 17 15:12:15.974: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:15.974: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:16.063142       8 log.go:172] (0xc001395d90) (0xc0003a1680) Create stream
I0217 15:12:16.063221       8 log.go:172] (0xc001395d90) (0xc0003a1680) Stream added, broadcasting: 1
I0217 15:12:16.073167       8 log.go:172] (0xc001395d90) Reply frame received for 1
I0217 15:12:16.073244       8 log.go:172] (0xc001395d90) (0xc0035f8140) Create stream
I0217 15:12:16.073255       8 log.go:172] (0xc001395d90) (0xc0035f8140) Stream added, broadcasting: 3
I0217 15:12:16.074462       8 log.go:172] (0xc001395d90) Reply frame received for 3
I0217 15:12:16.074486       8 log.go:172] (0xc001395d90) (0xc0017f4be0) Create stream
I0217 15:12:16.074496       8 log.go:172] (0xc001395d90) (0xc0017f4be0) Stream added, broadcasting: 5
I0217 15:12:16.075798       8 log.go:172] (0xc001395d90) Reply frame received for 5
I0217 15:12:16.149650       8 log.go:172] (0xc001395d90) Data frame received for 3
I0217 15:12:16.149950       8 log.go:172] (0xc0035f8140) (3) Data frame handling
I0217 15:12:16.149980       8 log.go:172] (0xc0035f8140) (3) Data frame sent
I0217 15:12:16.258417       8 log.go:172] (0xc001395d90) (0xc0035f8140) Stream removed, broadcasting: 3
I0217 15:12:16.258524       8 log.go:172] (0xc001395d90) Data frame received for 1
I0217 15:12:16.258541       8 log.go:172] (0xc0003a1680) (1) Data frame handling
I0217 15:12:16.258573       8 log.go:172] (0xc0003a1680) (1) Data frame sent
I0217 15:12:16.258593       8 log.go:172] (0xc001395d90) (0xc0003a1680) Stream removed, broadcasting: 1
I0217 15:12:16.258678       8 log.go:172] (0xc001395d90) (0xc0017f4be0) Stream removed, broadcasting: 5
I0217 15:12:16.258766       8 log.go:172] (0xc001395d90) (0xc0003a1680) Stream removed, broadcasting: 1
I0217 15:12:16.258798       8 log.go:172] (0xc001395d90) (0xc0035f8140) Stream removed, broadcasting: 3
I0217 15:12:16.258822       8 log.go:172] (0xc001395d90) (0xc0017f4be0) Stream removed, broadcasting: 5
Feb 17 15:12:16.258: INFO: Exec stderr: ""
Feb 17 15:12:16.258: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:16.258: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:16.259032       8 log.go:172] (0xc001395d90) Go away received
I0217 15:12:16.316659       8 log.go:172] (0xc0016ce0b0) (0xc0017f5040) Create stream
I0217 15:12:16.316760       8 log.go:172] (0xc0016ce0b0) (0xc0017f5040) Stream added, broadcasting: 1
I0217 15:12:16.322892       8 log.go:172] (0xc0016ce0b0) Reply frame received for 1
I0217 15:12:16.322926       8 log.go:172] (0xc0016ce0b0) (0xc0017f5220) Create stream
I0217 15:12:16.322937       8 log.go:172] (0xc0016ce0b0) (0xc0017f5220) Stream added, broadcasting: 3
I0217 15:12:16.324237       8 log.go:172] (0xc0016ce0b0) Reply frame received for 3
I0217 15:12:16.324270       8 log.go:172] (0xc0016ce0b0) (0xc0035f81e0) Create stream
I0217 15:12:16.324281       8 log.go:172] (0xc0016ce0b0) (0xc0035f81e0) Stream added, broadcasting: 5
I0217 15:12:16.325474       8 log.go:172] (0xc0016ce0b0) Reply frame received for 5
I0217 15:12:16.405343       8 log.go:172] (0xc0016ce0b0) Data frame received for 3
I0217 15:12:16.405387       8 log.go:172] (0xc0017f5220) (3) Data frame handling
I0217 15:12:16.405410       8 log.go:172] (0xc0017f5220) (3) Data frame sent
I0217 15:12:16.555853       8 log.go:172] (0xc0016ce0b0) (0xc0017f5220) Stream removed, broadcasting: 3
I0217 15:12:16.555982       8 log.go:172] (0xc0016ce0b0) Data frame received for 1
I0217 15:12:16.556005       8 log.go:172] (0xc0017f5040) (1) Data frame handling
I0217 15:12:16.556021       8 log.go:172] (0xc0017f5040) (1) Data frame sent
I0217 15:12:16.556103       8 log.go:172] (0xc0016ce0b0) (0xc0035f81e0) Stream removed, broadcasting: 5
I0217 15:12:16.556142       8 log.go:172] (0xc0016ce0b0) (0xc0017f5040) Stream removed, broadcasting: 1
I0217 15:12:16.556156       8 log.go:172] (0xc0016ce0b0) Go away received
I0217 15:12:16.556304       8 log.go:172] (0xc0016ce0b0) (0xc0017f5040) Stream removed, broadcasting: 1
I0217 15:12:16.556340       8 log.go:172] (0xc0016ce0b0) (0xc0017f5220) Stream removed, broadcasting: 3
I0217 15:12:16.556362       8 log.go:172] (0xc0016ce0b0) (0xc0035f81e0) Stream removed, broadcasting: 5
Feb 17 15:12:16.556: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 17 15:12:16.556: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:16.556: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:16.614934       8 log.go:172] (0xc001a1d080) (0xc0035f8500) Create stream
I0217 15:12:16.614983       8 log.go:172] (0xc001a1d080) (0xc0035f8500) Stream added, broadcasting: 1
I0217 15:12:16.623784       8 log.go:172] (0xc001a1d080) Reply frame received for 1
I0217 15:12:16.623984       8 log.go:172] (0xc001a1d080) (0xc002336640) Create stream
I0217 15:12:16.624013       8 log.go:172] (0xc001a1d080) (0xc002336640) Stream added, broadcasting: 3
I0217 15:12:16.627210       8 log.go:172] (0xc001a1d080) Reply frame received for 3
I0217 15:12:16.627239       8 log.go:172] (0xc001a1d080) (0xc0035f85a0) Create stream
I0217 15:12:16.627250       8 log.go:172] (0xc001a1d080) (0xc0035f85a0) Stream added, broadcasting: 5
I0217 15:12:16.630962       8 log.go:172] (0xc001a1d080) Reply frame received for 5
I0217 15:12:16.774188       8 log.go:172] (0xc001a1d080) Data frame received for 3
I0217 15:12:16.774250       8 log.go:172] (0xc002336640) (3) Data frame handling
I0217 15:12:16.774281       8 log.go:172] (0xc002336640) (3) Data frame sent
I0217 15:12:16.967854       8 log.go:172] (0xc001a1d080) (0xc002336640) Stream removed, broadcasting: 3
I0217 15:12:16.968142       8 log.go:172] (0xc001a1d080) Data frame received for 1
I0217 15:12:16.968168       8 log.go:172] (0xc0035f8500) (1) Data frame handling
I0217 15:12:16.968227       8 log.go:172] (0xc0035f8500) (1) Data frame sent
I0217 15:12:16.968412       8 log.go:172] (0xc001a1d080) (0xc0035f8500) Stream removed, broadcasting: 1
I0217 15:12:16.968704       8 log.go:172] (0xc001a1d080) (0xc0035f85a0) Stream removed, broadcasting: 5
I0217 15:12:16.968787       8 log.go:172] (0xc001a1d080) Go away received
I0217 15:12:16.968874       8 log.go:172] (0xc001a1d080) (0xc0035f8500) Stream removed, broadcasting: 1
I0217 15:12:16.968888       8 log.go:172] (0xc001a1d080) (0xc002336640) Stream removed, broadcasting: 3
I0217 15:12:16.968906       8 log.go:172] (0xc001a1d080) (0xc0035f85a0) Stream removed, broadcasting: 5
Feb 17 15:12:16.968: INFO: Exec stderr: ""
Feb 17 15:12:16.969: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:16.969: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:17.084270       8 log.go:172] (0xc001694d10) (0xc0003a1b80) Create stream
I0217 15:12:17.084664       8 log.go:172] (0xc001694d10) (0xc0003a1b80) Stream added, broadcasting: 1
I0217 15:12:17.094734       8 log.go:172] (0xc001694d10) Reply frame received for 1
I0217 15:12:17.094767       8 log.go:172] (0xc001694d10) (0xc0003a1d60) Create stream
I0217 15:12:17.094775       8 log.go:172] (0xc001694d10) (0xc0003a1d60) Stream added, broadcasting: 3
I0217 15:12:17.096555       8 log.go:172] (0xc001694d10) Reply frame received for 3
I0217 15:12:17.096591       8 log.go:172] (0xc001694d10) (0xc002336780) Create stream
I0217 15:12:17.096608       8 log.go:172] (0xc001694d10) (0xc002336780) Stream added, broadcasting: 5
I0217 15:12:17.097853       8 log.go:172] (0xc001694d10) Reply frame received for 5
I0217 15:12:17.267825       8 log.go:172] (0xc001694d10) Data frame received for 3
I0217 15:12:17.267910       8 log.go:172] (0xc0003a1d60) (3) Data frame handling
I0217 15:12:17.267940       8 log.go:172] (0xc0003a1d60) (3) Data frame sent
I0217 15:12:17.458702       8 log.go:172] (0xc001694d10) (0xc0003a1d60) Stream removed, broadcasting: 3
I0217 15:12:17.458899       8 log.go:172] (0xc001694d10) Data frame received for 1
I0217 15:12:17.458908       8 log.go:172] (0xc0003a1b80) (1) Data frame handling
I0217 15:12:17.458920       8 log.go:172] (0xc0003a1b80) (1) Data frame sent
I0217 15:12:17.458930       8 log.go:172] (0xc001694d10) (0xc0003a1b80) Stream removed, broadcasting: 1
I0217 15:12:17.459052       8 log.go:172] (0xc001694d10) (0xc002336780) Stream removed, broadcasting: 5
I0217 15:12:17.459107       8 log.go:172] (0xc001694d10) (0xc0003a1b80) Stream removed, broadcasting: 1
I0217 15:12:17.459116       8 log.go:172] (0xc001694d10) (0xc0003a1d60) Stream removed, broadcasting: 3
I0217 15:12:17.459123       8 log.go:172] (0xc001694d10) (0xc002336780) Stream removed, broadcasting: 5
Feb 17 15:12:17.459: INFO: Exec stderr: ""
Feb 17 15:12:17.459: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:17.459: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:17.463127       8 log.go:172] (0xc001694d10) Go away received
I0217 15:12:17.544479       8 log.go:172] (0xc0013adef0) (0xc002336b40) Create stream
I0217 15:12:17.544637       8 log.go:172] (0xc0013adef0) (0xc002336b40) Stream added, broadcasting: 1
I0217 15:12:17.550112       8 log.go:172] (0xc0013adef0) Reply frame received for 1
I0217 15:12:17.550148       8 log.go:172] (0xc0013adef0) (0xc0003a1f40) Create stream
I0217 15:12:17.550159       8 log.go:172] (0xc0013adef0) (0xc0003a1f40) Stream added, broadcasting: 3
I0217 15:12:17.551884       8 log.go:172] (0xc0013adef0) Reply frame received for 3
I0217 15:12:17.551928       8 log.go:172] (0xc0013adef0) (0xc0035f8640) Create stream
I0217 15:12:17.551954       8 log.go:172] (0xc0013adef0) (0xc0035f8640) Stream added, broadcasting: 5
I0217 15:12:17.554193       8 log.go:172] (0xc0013adef0) Reply frame received for 5
I0217 15:12:17.715987       8 log.go:172] (0xc0013adef0) Data frame received for 3
I0217 15:12:17.716164       8 log.go:172] (0xc0003a1f40) (3) Data frame handling
I0217 15:12:17.716213       8 log.go:172] (0xc0003a1f40) (3) Data frame sent
I0217 15:12:17.928571       8 log.go:172] (0xc0013adef0) Data frame received for 1
I0217 15:12:17.928664       8 log.go:172] (0xc002336b40) (1) Data frame handling
I0217 15:12:17.928696       8 log.go:172] (0xc002336b40) (1) Data frame sent
I0217 15:12:17.928718       8 log.go:172] (0xc0013adef0) (0xc002336b40) Stream removed, broadcasting: 1
I0217 15:12:17.929019       8 log.go:172] (0xc0013adef0) (0xc0003a1f40) Stream removed, broadcasting: 3
I0217 15:12:17.929201       8 log.go:172] (0xc0013adef0) (0xc0035f8640) Stream removed, broadcasting: 5
I0217 15:12:17.929272       8 log.go:172] (0xc0013adef0) (0xc002336b40) Stream removed, broadcasting: 1
I0217 15:12:17.929286       8 log.go:172] (0xc0013adef0) (0xc0003a1f40) Stream removed, broadcasting: 3
I0217 15:12:17.929321       8 log.go:172] (0xc0013adef0) (0xc0035f8640) Stream removed, broadcasting: 5
Feb 17 15:12:17.929: INFO: Exec stderr: ""
I0217 15:12:17.929688       8 log.go:172] (0xc0013adef0) Go away received
Feb 17 15:12:17.929: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3369 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 15:12:17.929: INFO: >>> kubeConfig: /root/.kube/config
I0217 15:12:18.018106       8 log.go:172] (0xc001abcb00) (0xc002336f00) Create stream
I0217 15:12:18.018248       8 log.go:172] (0xc001abcb00) (0xc002336f00) Stream added, broadcasting: 1
I0217 15:12:18.028074       8 log.go:172] (0xc001abcb00) Reply frame received for 1
I0217 15:12:18.028116       8 log.go:172] (0xc001abcb00) (0xc0035f8820) Create stream
I0217 15:12:18.028164       8 log.go:172] (0xc001abcb00) (0xc0035f8820) Stream added, broadcasting: 3
I0217 15:12:18.030246       8 log.go:172] (0xc001abcb00) Reply frame received for 3
I0217 15:12:18.030293       8 log.go:172] (0xc001abcb00) (0xc0017f5400) Create stream
I0217 15:12:18.030313       8 log.go:172] (0xc001abcb00) (0xc0017f5400) Stream added, broadcasting: 5
I0217 15:12:18.031631       8 log.go:172] (0xc001abcb00) Reply frame received for 5
I0217 15:12:18.190303       8 log.go:172] (0xc001abcb00) Data frame received for 3
I0217 15:12:18.190349       8 log.go:172] (0xc0035f8820) (3) Data frame handling
I0217 15:12:18.190386       8 log.go:172] (0xc0035f8820) (3) Data frame sent
I0217 15:12:18.287990       8 log.go:172] (0xc001abcb00) (0xc0035f8820) Stream removed, broadcasting: 3
I0217 15:12:18.288360       8 log.go:172] (0xc001abcb00) Data frame received for 1
I0217 15:12:18.288391       8 log.go:172] (0xc002336f00) (1) Data frame handling
I0217 15:12:18.288416       8 log.go:172] (0xc002336f00) (1) Data frame sent
I0217 15:12:18.288437       8 log.go:172] (0xc001abcb00) (0xc0017f5400) Stream removed, broadcasting: 5
I0217 15:12:18.288499       8 log.go:172] (0xc001abcb00) (0xc002336f00) Stream removed, broadcasting: 1
I0217 15:12:18.288529       8 log.go:172] (0xc001abcb00) Go away received
I0217 15:12:18.288679       8 log.go:172] (0xc001abcb00) (0xc002336f00) Stream removed, broadcasting: 1
I0217 15:12:18.288703       8 log.go:172] (0xc001abcb00) (0xc0035f8820) Stream removed, broadcasting: 3
I0217 15:12:18.288720       8 log.go:172] (0xc001abcb00) (0xc0017f5400) Stream removed, broadcasting: 5
Feb 17 15:12:18.288: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:12:18.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3369" for this suite.
Feb 17 15:13:10.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:13:10.515: INFO: namespace e2e-kubelet-etc-hosts-3369 deletion completed in 52.219427207s

• [SLOW TEST:76.175 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
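The `/etc/hosts` spec above covers three cases: containers in a `hostNetwork=false` pod get a kubelet-managed `/etc/hosts`; a container that explicitly mounts its own `/etc/hosts` is left alone; and containers in a `hostNetwork=true` pod see the node's file unmanaged. A hedged sketch of the third-container case — `test-pod` and `busybox-3` match the names in the exec calls above, but the exact manifest the test builds is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                 # matches the PodName in the ExecWithOptions calls
spec:
  hostNetwork: false             # kubelet manages /etc/hosts for busybox-1/busybox-2
  containers:
  - name: busybox-3
    image: busybox               # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: etc-hosts
      mountPath: /etc/hosts      # explicit mount: kubelet must NOT rewrite this file
  volumes:
  - name: etc-hosts
    hostPath:
      path: /etc/hosts
```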
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:13:10.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 17 15:13:10.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8365'
Feb 17 15:13:13.835: INFO: stderr: ""
Feb 17 15:13:13.835: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 17 15:13:13.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8365'
Feb 17 15:13:14.184: INFO: stderr: ""
Feb 17 15:13:14.184: INFO: stdout: "update-demo-nautilus-7s6n6 update-demo-nautilus-tr5pm "
Feb 17 15:13:14.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s6n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8365'
Feb 17 15:13:14.361: INFO: stderr: ""
Feb 17 15:13:14.361: INFO: stdout: ""
Feb 17 15:13:14.361: INFO: update-demo-nautilus-7s6n6 is created but not running
Feb 17 15:13:19.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8365'
Feb 17 15:13:19.928: INFO: stderr: ""
Feb 17 15:13:19.928: INFO: stdout: "update-demo-nautilus-7s6n6 update-demo-nautilus-tr5pm "
Feb 17 15:13:19.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s6n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8365'
Feb 17 15:13:20.445: INFO: stderr: ""
Feb 17 15:13:20.445: INFO: stdout: ""
Feb 17 15:13:20.445: INFO: update-demo-nautilus-7s6n6 is created but not running
Feb 17 15:13:25.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8365'
Feb 17 15:13:25.632: INFO: stderr: ""
Feb 17 15:13:25.632: INFO: stdout: "update-demo-nautilus-7s6n6 update-demo-nautilus-tr5pm "
Feb 17 15:13:25.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s6n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8365'
Feb 17 15:13:25.751: INFO: stderr: ""
Feb 17 15:13:25.751: INFO: stdout: "true"
Feb 17 15:13:25.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s6n6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8365'
Feb 17 15:13:25.848: INFO: stderr: ""
Feb 17 15:13:25.848: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 15:13:25.848: INFO: validating pod update-demo-nautilus-7s6n6
Feb 17 15:13:25.875: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 17 15:13:25.875: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 15:13:25.875: INFO: update-demo-nautilus-7s6n6 is verified up and running
Feb 17 15:13:25.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tr5pm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8365'
Feb 17 15:13:25.996: INFO: stderr: ""
Feb 17 15:13:25.996: INFO: stdout: "true"
Feb 17 15:13:25.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tr5pm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8365'
Feb 17 15:13:26.120: INFO: stderr: ""
Feb 17 15:13:26.120: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 17 15:13:26.120: INFO: validating pod update-demo-nautilus-tr5pm
Feb 17 15:13:26.160: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 17 15:13:26.160: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 17 15:13:26.160: INFO: update-demo-nautilus-tr5pm is verified up and running
STEP: using delete to clean up resources
Feb 17 15:13:26.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8365'
Feb 17 15:13:26.317: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 15:13:26.318: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 17 15:13:26.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8365'
Feb 17 15:13:26.437: INFO: stderr: "No resources found.\n"
Feb 17 15:13:26.437: INFO: stdout: ""
Feb 17 15:13:26.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8365 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 17 15:13:26.541: INFO: stderr: ""
Feb 17 15:13:26.541: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:13:26.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8365" for this suite.
Feb 17 15:13:48.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:13:48.689: INFO: namespace kubectl-8365 deletion completed in 22.130977954s

• [SLOW TEST:38.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
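Editor's note: the `kubectl get pods -o template` calls above repeatedly evaluate a Go template that prints `true` only when a container named `update-demo` is in the `running` state. The `exists` helper in that template is provided internally by kubectl; the sketch below reimplements a plausible version of it with `text/template` purely to illustrate how the template logic in the log evaluates against a pod object (the `exists` implementation and the map-based pod fixture are assumptions for illustration, not kubectl's actual code).

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// exists walks nested map keys and reports whether they are all present.
// This mimics kubectl's template helper of the same name (assumed shape).
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	// The exact template string used by the e2e test above.
	tmpl := `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`

	// Minimal pod-like fixture: one running update-demo container.
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}

	t := template.Must(template.New("running").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(tmpl))
	var sb strings.Builder
	if err := t.Execute(&sb, pod); err != nil {
		panic(err)
	}
	fmt.Println(sb.String()) // prints: true
}
```

While the container is still pending (no `state.running` key), the template expands to an empty string, which is exactly why the log alternates between `stdout: ""` / "created but not running" and `stdout: "true"`.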
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:13:48.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 17 15:14:00.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-32708304-9e3c-4a7f-86a1-d48808c8303e -c busybox-main-container --namespace=emptydir-7692 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 17 15:14:01.285: INFO: stderr: "I0217 15:14:01.039690    3694 log.go:172] (0xc000890370) (0xc0005f2aa0) Create stream\nI0217 15:14:01.039870    3694 log.go:172] (0xc000890370) (0xc0005f2aa0) Stream added, broadcasting: 1\nI0217 15:14:01.045873    3694 log.go:172] (0xc000890370) Reply frame received for 1\nI0217 15:14:01.045903    3694 log.go:172] (0xc000890370) (0xc0005f2b40) Create stream\nI0217 15:14:01.045910    3694 log.go:172] (0xc000890370) (0xc0005f2b40) Stream added, broadcasting: 3\nI0217 15:14:01.047330    3694 log.go:172] (0xc000890370) Reply frame received for 3\nI0217 15:14:01.047361    3694 log.go:172] (0xc000890370) (0xc0005f2be0) Create stream\nI0217 15:14:01.047372    3694 log.go:172] (0xc000890370) (0xc0005f2be0) Stream added, broadcasting: 5\nI0217 15:14:01.051392    3694 log.go:172] (0xc000890370) Reply frame received for 5\nI0217 15:14:01.149629    3694 log.go:172] (0xc000890370) Data frame received for 3\nI0217 15:14:01.149665    3694 log.go:172] (0xc0005f2b40) (3) Data frame handling\nI0217 15:14:01.149679    3694 log.go:172] (0xc0005f2b40) (3) Data frame sent\nI0217 15:14:01.276316    3694 log.go:172] (0xc000890370) (0xc0005f2b40) Stream removed, broadcasting: 3\nI0217 15:14:01.276487    3694 log.go:172] (0xc000890370) Data frame received for 1\nI0217 15:14:01.276549    3694 log.go:172] (0xc000890370) (0xc0005f2be0) Stream removed, broadcasting: 5\nI0217 15:14:01.276727    3694 log.go:172] (0xc0005f2aa0) (1) Data frame handling\nI0217 15:14:01.276761    3694 log.go:172] (0xc0005f2aa0) (1) Data frame sent\nI0217 15:14:01.276782    3694 log.go:172] (0xc000890370) (0xc0005f2aa0) Stream removed, broadcasting: 1\nI0217 15:14:01.276810    3694 log.go:172] (0xc000890370) Go away received\nI0217 15:14:01.277458    3694 log.go:172] (0xc000890370) (0xc0005f2aa0) Stream removed, broadcasting: 1\nI0217 15:14:01.277476    3694 log.go:172] (0xc000890370) (0xc0005f2b40) Stream removed, broadcasting: 3\nI0217 15:14:01.277484    3694 log.go:172] 
(0xc000890370) (0xc0005f2be0) Stream removed, broadcasting: 5\n"
Feb 17 15:14:01.286: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:14:01.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7692" for this suite.
Feb 17 15:14:07.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:14:07.440: INFO: namespace emptydir-7692 deletion completed in 6.146195217s

• [SLOW TEST:18.749 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
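Editor's note: the EmptyDir test above creates one pod in which a busybox container writes `/usr/share/volumeshare/shareddata.txt` and an nginx-side exec reads it back through a shared `emptyDir` volume. The manifest below is a minimal sketch of that pod shape, reconstructed from the names and images visible in the log (the actual spec lives in test/e2e/common/empty_dir.go and may differ in detail):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example   # illustrative; the test generates a UUID suffix
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                   # node-local scratch volume, shared by both containers
  containers:
  - name: busybox-main-container
    image: busybox                 # image assumed; writes the file the test reads back
    command: ["/bin/sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: nginx-container
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

The `kubectl exec ... -- cat /usr/share/volumeshare/shareddata.txt` call in the log then succeeds because both containers mount the same `emptyDir` volume.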
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:14:07.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 15:14:07.520: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 17 15:14:07.531: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 17 15:14:12.548: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 17 15:14:18.566: INFO: Creating deployment "test-rolling-update-deployment"
Feb 17 15:14:18.588: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 17 15:14:18.659: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 17 15:14:20.677: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 17 15:14:20.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 15:14:22.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 15:14:24.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 15:14:26.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717549258, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 15:14:28.704: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 17 15:14:28.726: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6972,SelfLink:/apis/apps/v1/namespaces/deployment-6972/deployments/test-rolling-update-deployment,UID:0c1def74-c670-4085-9c62-6f856b75ff32,ResourceVersion:24715621,Generation:1,CreationTimestamp:2020-02-17 15:14:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-17 15:14:18 +0000 UTC 2020-02-17 15:14:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-17 15:14:26 +0000 UTC 2020-02-17 15:14:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 17 15:14:28.732: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6972,SelfLink:/apis/apps/v1/namespaces/deployment-6972/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:f4b7093a-9b3f-4ace-acfc-41aa89188ac5,ResourceVersion:24715610,Generation:1,CreationTimestamp:2020-02-17 15:14:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0c1def74-c670-4085-9c62-6f856b75ff32 0xc002438e97 0xc002438e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 17 15:14:28.732: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 17 15:14:28.732: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6972,SelfLink:/apis/apps/v1/namespaces/deployment-6972/replicasets/test-rolling-update-controller,UID:b39babc5-627b-446b-a059-b6dde8f8fda1,ResourceVersion:24715620,Generation:2,CreationTimestamp:2020-02-17 15:14:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0c1def74-c670-4085-9c62-6f856b75ff32 0xc002438daf 0xc002438dc0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 15:14:28.738: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-8x5tr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-8x5tr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6972,SelfLink:/api/v1/namespaces/deployment-6972/pods/test-rolling-update-deployment-79f6b9d75c-8x5tr,UID:daf00528-31b8-4bfc-ab23-a768ada6ca06,ResourceVersion:24715609,Generation:0,CreationTimestamp:2020-02-17 15:14:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c f4b7093a-9b3f-4ace-acfc-41aa89188ac5 0xc001f3c2d7 0xc001f3c2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkrcg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkrcg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qkrcg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f3c350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f3c370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-17 15:14:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-17 15:14:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://82b465558f2caae2b7853b512daf21e0b57d2c47f23d7df763f1efe786af0264}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:14:28.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6972" for this suite.
Feb 17 15:14:34.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:14:34.872: INFO: namespace deployment-6972 deletion completed in 6.127590005s

• [SLOW TEST:27.432 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
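Editor's note: the Deployment dumps above show the default rolling-update strategy (`MaxUnavailable:25%`, `MaxSurge:25%`; the raw log renders these with Go's `%!,(MISSING)` printf artifact). A minimal sketch of the Deployment shape driving this test, reconstructed from the labels, image, and strategy fields in the dump (the authoritative spec is built in test/e2e/apps/deployment.go):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod             # matches the adopted "test-rolling-update-controller" ReplicaSet
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%          # defaults, as printed in the DeploymentSpec dump
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Because the Deployment's selector matches the pre-existing ReplicaSet's pods, the controller adopts it as an "old" ReplicaSet, scales it to 0, and rolls the single replica over to the new redis-based ReplicaSet, which is what the status dumps track.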
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:14:34.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 15:14:35.068: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 17 15:14:40.100: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 17 15:14:46.116: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 17 15:14:46.226: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3413,SelfLink:/apis/apps/v1/namespaces/deployment-3413/deployments/test-cleanup-deployment,UID:87b70d04-f991-40bf-9a97-b5a878b7b7a5,ResourceVersion:24715689,Generation:1,CreationTimestamp:2020-02-17 15:14:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 17 15:14:46.234: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb 17 15:14:46.234: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 17 15:14:46.234: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3413,SelfLink:/apis/apps/v1/namespaces/deployment-3413/replicasets/test-cleanup-controller,UID:952d1ab4-3b93-4b00-a530-45609d704df2,ResourceVersion:24715690,Generation:1,CreationTimestamp:2020-02-17 15:14:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 87b70d04-f991-40bf-9a97-b5a878b7b7a5 0xc001d7ced7 0xc001d7ced8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 17 15:14:46.319: INFO: Pod "test-cleanup-controller-tchnh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tchnh,GenerateName:test-cleanup-controller-,Namespace:deployment-3413,SelfLink:/api/v1/namespaces/deployment-3413/pods/test-cleanup-controller-tchnh,UID:86c87cf1-6b29-4fcb-b925-80a05da51313,ResourceVersion:24715684,Generation:0,CreationTimestamp:2020-02-17 15:14:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 952d1ab4-3b93-4b00-a530-45609d704df2 0xc001d7d867 0xc001d7d868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-swm5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-swm5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-swm5s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d7d920} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001d7d940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 15:14:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-17 15:14:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 15:14:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://135debf445fc2f7245f7bf210d289b4b7cdb93f9a0420198a62235d7adbb31cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:14:46.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3413" for this suite.
Feb 17 15:14:54.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:14:54.611: INFO: namespace deployment-3413 deletion completed in 8.200555834s

• [SLOW TEST:19.739 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:14:54.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6c4632e2-abfb-4bad-adaf-3f70d33dccd2
STEP: Creating a pod to test consume secrets
Feb 17 15:14:54.776: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468" in namespace "projected-1787" to be "success or failure"
Feb 17 15:14:54.828: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Pending", Reason="", readiness=false. Elapsed: 51.616533ms
Feb 17 15:14:56.836: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059605322s
Feb 17 15:14:58.844: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067883689s
Feb 17 15:15:00.857: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081174091s
Feb 17 15:15:02.867: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091157783s
Feb 17 15:15:04.878: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Pending", Reason="", readiness=false. Elapsed: 10.101949778s
Feb 17 15:15:06.898: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.122300678s
STEP: Saw pod success
Feb 17 15:15:06.898: INFO: Pod "pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468" satisfied condition "success or failure"
Feb 17 15:15:06.905: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468 container projected-secret-volume-test: 
STEP: delete the pod
Feb 17 15:15:07.075: INFO: Waiting for pod pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468 to disappear
Feb 17 15:15:07.083: INFO: Pod pod-projected-secrets-65d22e23-fd95-4697-a40b-d42895c75468 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:15:07.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1787" for this suite.
Feb 17 15:15:13.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:15:13.233: INFO: namespace projected-1787 deletion completed in 6.145492508s

• [SLOW TEST:18.622 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:15:13.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 17 15:15:13.390: INFO: Waiting up to 5m0s for pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0" in namespace "emptydir-4570" to be "success or failure"
Feb 17 15:15:13.406: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.551299ms
Feb 17 15:15:15.416: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02607904s
Feb 17 15:15:17.428: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037175575s
Feb 17 15:15:19.438: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047569824s
Feb 17 15:15:21.448: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058003126s
Feb 17 15:15:23.461: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070160486s
STEP: Saw pod success
Feb 17 15:15:23.461: INFO: Pod "pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0" satisfied condition "success or failure"
Feb 17 15:15:23.466: INFO: Trying to get logs from node iruya-node pod pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0 container test-container: 
STEP: delete the pod
Feb 17 15:15:23.615: INFO: Waiting for pod pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0 to disappear
Feb 17 15:15:23.623: INFO: Pod pod-e3d177f5-a0f9-46e7-ae10-8d635c4e47f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:15:23.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4570" for this suite.
Feb 17 15:15:29.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:15:29.793: INFO: namespace emptydir-4570 deletion completed in 6.155751748s

• [SLOW TEST:16.559 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:15:29.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5733 to expose endpoints map[]
Feb 17 15:15:29.943: INFO: successfully validated that service endpoint-test2 in namespace services-5733 exposes endpoints map[] (5.389155ms elapsed)
STEP: Creating pod pod1 in namespace services-5733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5733 to expose endpoints map[pod1:[80]]
Feb 17 15:15:34.043: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.081741421s elapsed, will retry)
Feb 17 15:15:38.167: INFO: successfully validated that service endpoint-test2 in namespace services-5733 exposes endpoints map[pod1:[80]] (8.205389003s elapsed)
STEP: Creating pod pod2 in namespace services-5733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5733 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 17 15:15:42.291: INFO: Unexpected endpoints: found map[5489926b-2182-4167-be96-2acd40cc33c4:[80]], expected map[pod1:[80] pod2:[80]] (4.111557626s elapsed, will retry)
Feb 17 15:15:45.344: INFO: successfully validated that service endpoint-test2 in namespace services-5733 exposes endpoints map[pod1:[80] pod2:[80]] (7.164458058s elapsed)
STEP: Deleting pod pod1 in namespace services-5733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5733 to expose endpoints map[pod2:[80]]
Feb 17 15:15:46.389: INFO: successfully validated that service endpoint-test2 in namespace services-5733 exposes endpoints map[pod2:[80]] (1.033480076s elapsed)
STEP: Deleting pod pod2 in namespace services-5733
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5733 to expose endpoints map[]
Feb 17 15:15:46.454: INFO: successfully validated that service endpoint-test2 in namespace services-5733 exposes endpoints map[] (56.656031ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:15:46.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5733" for this suite.
Feb 17 15:16:08.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:16:08.737: INFO: namespace services-5733 deletion completed in 22.158368378s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.944 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:16:08.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 17 15:16:19.389: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e03f5e78-fbb0-422f-8cf9-59ab7361eeca"
Feb 17 15:16:19.389: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e03f5e78-fbb0-422f-8cf9-59ab7361eeca" in namespace "pods-6160" to be "terminated due to deadline exceeded"
Feb 17 15:16:19.394: INFO: Pod "pod-update-activedeadlineseconds-e03f5e78-fbb0-422f-8cf9-59ab7361eeca": Phase="Running", Reason="", readiness=true. Elapsed: 4.82057ms
Feb 17 15:16:21.460: INFO: Pod "pod-update-activedeadlineseconds-e03f5e78-fbb0-422f-8cf9-59ab7361eeca": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.071296646s
Feb 17 15:16:21.460: INFO: Pod "pod-update-activedeadlineseconds-e03f5e78-fbb0-422f-8cf9-59ab7361eeca" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:16:21.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6160" for this suite.
Feb 17 15:16:27.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:16:27.670: INFO: namespace pods-6160 deletion completed in 6.190758916s

• [SLOW TEST:18.933 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:16:27.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 15:16:27.819: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 17 15:16:27.829: INFO: Number of nodes with available pods: 0
Feb 17 15:16:27.829: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 17 15:16:27.868: INFO: Number of nodes with available pods: 0
Feb 17 15:16:27.868: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:28.878: INFO: Number of nodes with available pods: 0
Feb 17 15:16:28.878: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:29.880: INFO: Number of nodes with available pods: 0
Feb 17 15:16:29.880: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:30.883: INFO: Number of nodes with available pods: 0
Feb 17 15:16:30.883: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:31.876: INFO: Number of nodes with available pods: 0
Feb 17 15:16:31.877: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:32.893: INFO: Number of nodes with available pods: 0
Feb 17 15:16:32.893: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:33.881: INFO: Number of nodes with available pods: 0
Feb 17 15:16:33.881: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:34.882: INFO: Number of nodes with available pods: 0
Feb 17 15:16:34.882: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:35.879: INFO: Number of nodes with available pods: 1
Feb 17 15:16:35.879: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 17 15:16:35.991: INFO: Number of nodes with available pods: 1
Feb 17 15:16:35.991: INFO: Number of running nodes: 0, number of available pods: 1
Feb 17 15:16:37.001: INFO: Number of nodes with available pods: 0
Feb 17 15:16:37.001: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 17 15:16:37.028: INFO: Number of nodes with available pods: 0
Feb 17 15:16:37.028: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:38.035: INFO: Number of nodes with available pods: 0
Feb 17 15:16:38.035: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:39.040: INFO: Number of nodes with available pods: 0
Feb 17 15:16:39.040: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:40.037: INFO: Number of nodes with available pods: 0
Feb 17 15:16:40.037: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:41.042: INFO: Number of nodes with available pods: 0
Feb 17 15:16:41.042: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:42.034: INFO: Number of nodes with available pods: 0
Feb 17 15:16:42.034: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:43.035: INFO: Number of nodes with available pods: 0
Feb 17 15:16:43.035: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:44.049: INFO: Number of nodes with available pods: 0
Feb 17 15:16:44.049: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:45.037: INFO: Number of nodes with available pods: 0
Feb 17 15:16:45.037: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:46.035: INFO: Number of nodes with available pods: 0
Feb 17 15:16:46.035: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:47.036: INFO: Number of nodes with available pods: 0
Feb 17 15:16:47.036: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:48.035: INFO: Number of nodes with available pods: 0
Feb 17 15:16:48.035: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:49.039: INFO: Number of nodes with available pods: 0
Feb 17 15:16:49.039: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:50.058: INFO: Number of nodes with available pods: 0
Feb 17 15:16:50.058: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:51.036: INFO: Number of nodes with available pods: 0
Feb 17 15:16:51.036: INFO: Node iruya-node is running more than one daemon pod
Feb 17 15:16:52.035: INFO: Number of nodes with available pods: 1
Feb 17 15:16:52.035: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5443, will wait for the garbage collector to delete the pods
Feb 17 15:16:52.107: INFO: Deleting DaemonSet.extensions daemon-set took: 12.578599ms
Feb 17 15:16:53.107: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000438432s
Feb 17 15:16:59.242: INFO: Number of nodes with available pods: 0
Feb 17 15:16:59.242: INFO: Number of running nodes: 0, number of available pods: 0
Feb 17 15:16:59.246: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5443/daemonsets","resourceVersion":"24716083"},"items":null}

Feb 17 15:16:59.251: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5443/pods","resourceVersion":"24716083"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:16:59.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5443" for this suite.
Feb 17 15:17:05.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:17:05.528: INFO: namespace daemonsets-5443 deletion completed in 6.231412529s

• [SLOW TEST:37.858 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 15:17:05.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 17 15:17:14.814: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 15:17:14.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3651" for this suite.
Feb 17 15:17:20.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 15:17:21.019: INFO: namespace container-runtime-3651 deletion completed in 6.150583954s

• [SLOW TEST:15.490 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
Feb 17 15:17:21.019: INFO: Running AfterSuite actions on all nodes
Feb 17 15:17:21.019: INFO: Running AfterSuite actions on node 1
Feb 17 15:17:21.019: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8464.219 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8464.71s)
FAIL