I1216 12:56:21.307772 8 e2e.go:243] Starting e2e run "af1c7816-4fc5-424d-b60a-91f1ced6b809" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576500980 - Will randomize all specs
Will run 215 of 4412 specs

Dec 16 12:56:21.515: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:21.520: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 16 12:56:21.560: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 16 12:56:21.597: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 16 12:56:21.597: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 16 12:56:21.597: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 16 12:56:21.605: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 16 12:56:21.605: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 16 12:56:21.605: INFO: e2e test version: v1.15.7
Dec 16 12:56:21.606: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:56:21.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Dec 16 12:56:22.526: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
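For orientation (this is not part of the log): the spec starting here creates a pod with a projected downwardAPI volume that exposes the container's memory request as a file, runs it to completion, and checks the file's contents via the container log. A minimal sketch of such a manifest, with hypothetical names, image, and paths, might look like:

```yaml
# Sketch only: name, image, and mount path are illustrative, not taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"               # the value the volume file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

In the log below, "success or failure" means the framework waits for the pod to reach phase Succeeded or Failed, then reads the container log to verify the reported value.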
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:56:22.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551" in namespace "projected-3328" to be "success or failure"
Dec 16 12:56:22.586: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551": Phase="Pending", Reason="", readiness=false. Elapsed: 9.07342ms
Dec 16 12:56:24.761: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18381906s
Dec 16 12:56:26.767: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190041343s
Dec 16 12:56:28.786: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208164039s
Dec 16 12:56:30.812: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234762657s
Dec 16 12:56:32.833: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255347645s
STEP: Saw pod success
Dec 16 12:56:32.833: INFO: Pod "downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551" satisfied condition "success or failure"
Dec 16 12:56:32.837: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551 container client-container:
STEP: delete the pod
Dec 16 12:56:32.929: INFO: Waiting for pod downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551 to disappear
Dec 16 12:56:32.956: INFO: Pod downwardapi-volume-fab06604-e52d-4fc7-8b89-4925dd02e551 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:56:32.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3328" for this suite.
Dec 16 12:56:38.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:56:39.082: INFO: namespace projected-3328 deletion completed in 6.115748115s

• [SLOW TEST:17.475 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:56:39.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2
Dec 16 12:56:39.236: INFO: Pod name my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2: Found 0 pods out of 1
Dec 16 12:56:44.246: INFO: Pod name my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2: Found 1 pods out of 1
Dec 16 12:56:44.246: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2" are running
Dec 16 12:56:48.258: INFO: Pod "my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2-v46zk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:56:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:56:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:56:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:56:39 +0000 UTC Reason: Message:}])
Dec 16 12:56:48.259: INFO: Trying to dial the pod
Dec 16 12:56:53.297: INFO: Controller my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2: Got expected result from replica 1 [my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2-v46zk]: "my-hostname-basic-3ee5b747-27fd-4764-b850-067ca3a910e2-v46zk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:56:53.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2227" for this suite.
Dec 16 12:56:59.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:56:59.441: INFO: namespace replication-controller-2227 deletion completed in 6.136441041s

• [SLOW TEST:20.359 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:56:59.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:56:59.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1" in namespace "projected-1494" to be "success or failure"
Dec 16 12:56:59.743: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.281971ms
Dec 16 12:57:01.847: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138438207s
Dec 16 12:57:03.865: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156518713s
Dec 16 12:57:05.884: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175442976s
Dec 16 12:57:07.896: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187074383s
Dec 16 12:57:09.903: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194688148s
Dec 16 12:57:11.916: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.207591332s
STEP: Saw pod success
Dec 16 12:57:11.917: INFO: Pod "downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1" satisfied condition "success or failure"
Dec 16 12:57:11.921: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1 container client-container:
STEP: delete the pod
Dec 16 12:57:12.040: INFO: Waiting for pod downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1 to disappear
Dec 16 12:57:12.052: INFO: Pod downwardapi-volume-49ad4ad3-bb4b-4ff6-bfa1-dbf2e92a48a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:57:12.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1494" for this suite.
Dec 16 12:57:18.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:57:18.491: INFO: namespace projected-1494 deletion completed in 6.421670429s

• [SLOW TEST:19.049 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:57:18.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 16 12:57:18.660: INFO: Waiting up to 5m0s for pod "pod-8d436d5d-0234-4823-9004-874ddd4265de" in namespace "emptydir-349" to be "success or failure"
Dec 16 12:57:18.686: INFO: Pod "pod-8d436d5d-0234-4823-9004-874ddd4265de": Phase="Pending", Reason="", readiness=false. Elapsed: 25.405179ms
Dec 16 12:57:20.701: INFO: Pod "pod-8d436d5d-0234-4823-9004-874ddd4265de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040541035s
Dec 16 12:57:22.710: INFO: Pod "pod-8d436d5d-0234-4823-9004-874ddd4265de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049091515s
Dec 16 12:57:24.718: INFO: Pod "pod-8d436d5d-0234-4823-9004-874ddd4265de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057631615s
Dec 16 12:57:26.734: INFO: Pod "pod-8d436d5d-0234-4823-9004-874ddd4265de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073254807s
STEP: Saw pod success
Dec 16 12:57:26.735: INFO: Pod "pod-8d436d5d-0234-4823-9004-874ddd4265de" satisfied condition "success or failure"
Dec 16 12:57:26.751: INFO: Trying to get logs from node iruya-node pod pod-8d436d5d-0234-4823-9004-874ddd4265de container test-container:
STEP: delete the pod
Dec 16 12:57:26.885: INFO: Waiting for pod pod-8d436d5d-0234-4823-9004-874ddd4265de to disappear
Dec 16 12:57:26.893: INFO: Pod pod-8d436d5d-0234-4823-9004-874ddd4265de no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:57:26.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-349" for this suite.
Dec 16 12:57:34.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:57:35.041: INFO: namespace emptydir-349 deletion completed in 8.141035798s

• [SLOW TEST:16.550 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:57:35.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9ffef908-c9b1-4a32-b06a-284d1103f0ff
STEP: Creating a pod to test consume configMaps
Dec 16 12:57:35.217: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605" in namespace "configmap-9313" to be "success or failure"
Dec 16 12:57:35.242: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605": Phase="Pending", Reason="", readiness=false. Elapsed: 25.638276ms
Dec 16 12:57:37.249: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032097892s
Dec 16 12:57:39.263: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046524987s
Dec 16 12:57:41.271: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054629734s
Dec 16 12:57:43.285: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068043787s
Dec 16 12:57:45.292: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075493474s
STEP: Saw pod success
Dec 16 12:57:45.292: INFO: Pod "pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605" satisfied condition "success or failure"
Dec 16 12:57:45.297: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605 container configmap-volume-test:
STEP: delete the pod
Dec 16 12:57:45.420: INFO: Waiting for pod pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605 to disappear
Dec 16 12:57:45.427: INFO: Pod pod-configmaps-6f3204ef-48f7-47e0-8f77-e3a8ecc94605 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:57:45.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9313" for this suite.
Dec 16 12:57:51.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:57:51.606: INFO: namespace configmap-9313 deletion completed in 6.172665863s

• [SLOW TEST:16.564 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:57:51.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8391b1f8-ec28-4670-88bc-c98b3d80d911
STEP: Creating a pod to test consume secrets
Dec 16 12:57:51.678: INFO: Waiting up to 5m0s for pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565" in namespace "secrets-6480" to be "success or failure"
Dec 16 12:57:51.698: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Pending", Reason="", readiness=false. Elapsed: 20.275021ms
Dec 16 12:57:53.714: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036006608s
Dec 16 12:57:55.723: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045333616s
Dec 16 12:57:57.736: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058224846s
Dec 16 12:57:59.742: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063799966s
Dec 16 12:58:01.754: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076148628s
Dec 16 12:58:03.772: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.093405069s
STEP: Saw pod success
Dec 16 12:58:03.772: INFO: Pod "pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565" satisfied condition "success or failure"
Dec 16 12:58:03.788: INFO: Trying to get logs from node iruya-node pod pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565 container secret-volume-test:
STEP: delete the pod
Dec 16 12:58:04.998: INFO: Waiting for pod pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565 to disappear
Dec 16 12:58:05.023: INFO: Pod pod-secrets-9aab2c95-db4a-4a2f-9b64-31860ceab565 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:58:05.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6480" for this suite.
Dec 16 12:58:11.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:58:11.280: INFO: namespace secrets-6480 deletion completed in 6.210665662s

• [SLOW TEST:19.673 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:58:11.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 16 12:58:11.359: INFO: Waiting up to 5m0s for pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5" in namespace "downward-api-2337" to be "success or failure"
Dec 16 12:58:11.390: INFO: Pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.286745ms
Dec 16 12:58:13.406: INFO: Pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046640554s
Dec 16 12:58:15.415: INFO: Pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055765225s
Dec 16 12:58:17.424: INFO: Pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064408663s
Dec 16 12:58:19.431: INFO: Pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072278833s
STEP: Saw pod success
Dec 16 12:58:19.432: INFO: Pod "downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5" satisfied condition "success or failure"
Dec 16 12:58:19.435: INFO: Trying to get logs from node iruya-node pod downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5 container dapi-container:
STEP: delete the pod
Dec 16 12:58:19.547: INFO: Waiting for pod downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5 to disappear
Dec 16 12:58:19.552: INFO: Pod downward-api-67712e61-58c4-4724-947e-1ac9bc9f33b5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:58:19.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2337" for this suite.
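The Downward API spec that just finished injects pod metadata through environment variables rather than a volume. A sketch of the kind of pod it creates (names and image are hypothetical, not taken from the log):

```yaml
# Sketch only: name, image, and env var name are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's own UID, resolved at container start
```

The framework then checks that the pod's actual UID appears in the container's log output.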
Dec 16 12:58:25.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:58:25.674: INFO: namespace downward-api-2337 deletion completed in 6.115684863s

• [SLOW TEST:14.394 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:58:25.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 12:58:25.869: INFO: Create a RollingUpdate DaemonSet
Dec 16 12:58:25.881: INFO: Check that daemon pods launch on every node of the cluster
Dec 16 12:58:25.897: INFO: Number of nodes with available pods: 0
Dec 16 12:58:25.897: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:26.934: INFO: Number of nodes with available pods: 0
Dec 16 12:58:26.934: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:28.389: INFO: Number of nodes with available pods: 0
Dec 16 12:58:28.389: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:28.916: INFO: Number of nodes with available pods: 0
Dec 16 12:58:28.916: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:30.002: INFO: Number of nodes with available pods: 0
Dec 16 12:58:30.002: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:30.939: INFO: Number of nodes with available pods: 0
Dec 16 12:58:30.939: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:33.878: INFO: Number of nodes with available pods: 0
Dec 16 12:58:33.878: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:34.228: INFO: Number of nodes with available pods: 0
Dec 16 12:58:34.228: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:35.339: INFO: Number of nodes with available pods: 0
Dec 16 12:58:35.339: INFO: Node iruya-node is running more than one daemon pod
Dec 16 12:58:35.949: INFO: Number of nodes with available pods: 1
Dec 16 12:58:35.950: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 12:58:36.928: INFO: Number of nodes with available pods: 1
Dec 16 12:58:36.928: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 12:58:37.921: INFO: Number of nodes with available pods: 2
Dec 16 12:58:37.921: INFO: Number of running nodes: 2, number of available pods: 2
Dec 16 12:58:37.921: INFO: Update the DaemonSet to trigger a rollout
Dec 16 12:58:37.934: INFO: Updating DaemonSet daemon-set
Dec 16 12:58:48.168: INFO: Roll back the DaemonSet before rollout is complete
Dec 16 12:58:48.184: INFO: Updating DaemonSet daemon-set
Dec 16 12:58:48.184: INFO: Make sure DaemonSet rollback is complete
Dec 16 12:58:48.202: INFO: Wrong image for pod: daemon-set-6vz69. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 16 12:58:48.202: INFO: Pod daemon-set-6vz69 is not available
Dec 16 12:58:49.980: INFO: Wrong image for pod: daemon-set-6vz69. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 16 12:58:49.980: INFO: Pod daemon-set-6vz69 is not available
Dec 16 12:58:51.223: INFO: Wrong image for pod: daemon-set-6vz69. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 16 12:58:51.223: INFO: Pod daemon-set-6vz69 is not available
Dec 16 12:58:52.221: INFO: Pod daemon-set-vr2zf is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4953, will wait for the garbage collector to delete the pods
Dec 16 12:58:52.337: INFO: Deleting DaemonSet.extensions daemon-set took: 11.266333ms
Dec 16 12:58:53.638: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300848002s
Dec 16 12:59:03.245: INFO: Number of nodes with available pods: 0
Dec 16 12:59:03.246: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 12:59:03.253: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4953/daemonsets","resourceVersion":"16883606"},"items":null}
Dec 16 12:59:03.256: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4953/pods","resourceVersion":"16883606"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:59:03.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4953" for this suite.
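The DaemonSet in this spec is created with a RollingUpdate strategy, updated to a deliberately bad image (foo:non-existent), then rolled back before the rollout completes; the check is that pods which never ran the bad image are not needlessly restarted. A sketch of the starting object (labels are hypothetical; the image matches the one named in the log):

```yaml
# Sketch only: the selector/labels are illustrative, not taken from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate   # required for the rollout/rollback behavior exercised here
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Outside the test framework, the equivalent rollback would be `kubectl rollout undo daemonset/daemon-set`.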
Dec 16 12:59:09.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:59:09.461: INFO: namespace daemonsets-4953 deletion completed in 6.173983689s

• [SLOW TEST:43.787 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:59:09.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 16 12:59:09.541: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:59:24.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8046" for this suite.
Dec 16 12:59:30.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:59:30.215: INFO: namespace init-container-8046 deletion completed in 6.116993897s

• [SLOW TEST:20.752 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:59:30.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 16 12:59:40.569: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 12:59:40.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2018" for this suite.
Dec 16 12:59:46.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:59:47.001: INFO: namespace container-runtime-2018 deletion completed in 6.218669752s

• [SLOW TEST:16.786 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 12:59:47.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 16 12:59:47.110: INFO: Waiting up to 5m0s for pod
"downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72" in namespace "downward-api-4326" to be "success or failure" Dec 16 12:59:47.126: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72": Phase="Pending", Reason="", readiness=false. Elapsed: 16.003169ms Dec 16 12:59:49.154: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043361957s Dec 16 12:59:51.162: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051506123s Dec 16 12:59:53.206: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095336513s Dec 16 12:59:55.224: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113402586s Dec 16 12:59:57.233: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122940392s STEP: Saw pod success Dec 16 12:59:57.233: INFO: Pod "downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72" satisfied condition "success or failure" Dec 16 12:59:57.239: INFO: Trying to get logs from node iruya-node pod downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72 container dapi-container: STEP: delete the pod Dec 16 12:59:57.287: INFO: Waiting for pod downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72 to disappear Dec 16 12:59:57.296: INFO: Pod downward-api-828c82aa-ff4f-43b3-9354-cd01739b6a72 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 12:59:57.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4326" for this suite. 
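The Downward API test above creates a pod whose `dapi-container` reads the host IP from an environment variable and exits, which is why the pod is polled until it reaches `Succeeded`. A hedged sketch of such a pod follows; the container name `dapi-container` appears in the log, while the pod name, image, and command are assumptions:

```yaml
# Hedged sketch: expose the node's IP to a container via the
# downward API (status.hostIP). Pod name and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```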
Dec 16 13:00:03.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:00:03.460: INFO: namespace downward-api-4326 deletion completed in 6.154460144s • [SLOW TEST:16.459 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:00:03.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a56f88d0-3691-4f9c-b8ed-66ab32bffeaf STEP: Creating a pod to test consume configMaps Dec 16 13:00:03.639: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3" in namespace "projected-2978" to be "success or failure" Dec 16 13:00:03.745: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 105.006646ms Dec 16 13:00:05.750: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.110205203s Dec 16 13:00:07.758: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118484782s Dec 16 13:00:09.765: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125324697s Dec 16 13:00:11.797: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156987503s Dec 16 13:00:15.218: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Running", Reason="", readiness=true. Elapsed: 11.578496865s Dec 16 13:00:17.228: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.588031939s STEP: Saw pod success Dec 16 13:00:17.228: INFO: Pod "pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3" satisfied condition "success or failure" Dec 16 13:00:17.236: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3 container projected-configmap-volume-test: STEP: delete the pod Dec 16 13:00:17.671: INFO: Waiting for pod pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3 to disappear Dec 16 13:00:17.686: INFO: Pod pod-projected-configmaps-ed142e9f-31dd-4e41-a903-a48dd097b1b3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:00:17.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2978" for this suite. 
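The projected-ConfigMap test above mounts a ConfigMap through a `projected` volume and checks the file contents from inside the pod. A hedged sketch of that pattern follows; the container name `projected-configmap-volume-test` is from the log, and the pod name, ConfigMap name, key, image, and command are assumptions:

```yaml
# Hedged sketch: consume a ConfigMap via a projected volume.
# ConfigMap name, key, and pod name are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: my-configmap
          items:
          - key: data-1
            path: data-1
```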
Dec 16 13:00:23.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:00:23.917: INFO: namespace projected-2978 deletion completed in 6.222270186s • [SLOW TEST:20.457 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:00:23.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 16 13:00:34.657: INFO: Successfully updated pod "labelsupdatee3df29c4-4cf7-4ba4-9ecc-d644d14aeec2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:00:36.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4668" for this suite. 
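The "update labels on modification" test above relies on the kubelet refreshing a projected downwardAPI volume after the pod's labels change, which is why the log only needs to report "Successfully updated pod" and then wait. A hedged sketch of such a pod follows; every name here is an assumption:

```yaml
# Hedged sketch: pod labels surfaced through a projected downwardAPI
# volume; the mounted file is refreshed when labels are updated.
# All names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: labels-update-example
  labels:
    key: value
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```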
Dec 16 13:00:58.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:00:59.117: INFO: namespace projected-4668 deletion completed in 22.339783648s • [SLOW TEST:35.198 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:00:59.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 16 13:00:59.299: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 16 13:01:04.310: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 16 13:01:06.333: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 16 13:01:06.431: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1375,SelfLink:/apis/apps/v1/namespaces/deployment-1375/deployments/test-cleanup-deployment,UID:e3ac861a-f1d8-4693-acbf-c29124b999c3,ResourceVersion:16883938,Generation:1,CreationTimestamp:2019-12-16 13:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Dec 16 13:01:06.434: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Dec 16 13:01:06.434: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Dec 16 13:01:06.435: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-1375,SelfLink:/apis/apps/v1/namespaces/deployment-1375/replicasets/test-cleanup-controller,UID:42b1991a-386c-4e13-8e89-0e429f9d53fb,ResourceVersion:16883939,Generation:1,CreationTimestamp:2019-12-16 13:00:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment e3ac861a-f1d8-4693-acbf-c29124b999c3 0xc002f26ea7 0xc002f26ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 16 13:01:06.444: INFO: Pod "test-cleanup-controller-w58bn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-w58bn,GenerateName:test-cleanup-controller-,Namespace:deployment-1375,SelfLink:/api/v1/namespaces/deployment-1375/pods/test-cleanup-controller-w58bn,UID:3a211325-833b-4f94-88e9-ec4e67bd4a17,ResourceVersion:16883936,Generation:0,CreationTimestamp:2019-12-16 13:00:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 42b1991a-386c-4e13-8e89-0e429f9d53fb 0xc002f27417 0xc002f27418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cbfkq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cbfkq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-cbfkq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f27490} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f274b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:00:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:01:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:01:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:00:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-16 13:00:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 13:01:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://811dbb8d7d189980af6fdf848fa93b667375c7484dfbb77388c67e3604e0c7c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 
16 13:01:06.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1375" for this suite. Dec 16 13:01:14.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:01:14.805: INFO: namespace deployment-1375 deletion completed in 8.309857907s • [SLOW TEST:15.688 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:01:14.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 16 13:01:14.898: INFO: namespace kubectl-7917 Dec 16 13:01:14.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7917' Dec 16 13:01:17.936: INFO: stderr: "" Dec 16 13:01:17.936: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Dec 16 13:01:18.965: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:18.965: INFO: Found 0 / 1 Dec 16 13:01:19.949: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:19.949: INFO: Found 0 / 1 Dec 16 13:01:20.947: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:20.948: INFO: Found 0 / 1 Dec 16 13:01:21.949: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:21.949: INFO: Found 0 / 1 Dec 16 13:01:22.947: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:22.947: INFO: Found 0 / 1 Dec 16 13:01:24.036: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:24.036: INFO: Found 0 / 1 Dec 16 13:01:24.964: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:24.964: INFO: Found 0 / 1 Dec 16 13:01:25.953: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:25.953: INFO: Found 0 / 1 Dec 16 13:01:26.946: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:26.947: INFO: Found 1 / 1 Dec 16 13:01:26.947: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 16 13:01:26.954: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:01:26.954: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 16 13:01:26.954: INFO: wait on redis-master startup in kubectl-7917 Dec 16 13:01:26.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vcbnt redis-master --namespace=kubectl-7917' Dec 16 13:01:27.094: INFO: stderr: "" Dec 16 13:01:27.094: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Dec 13:01:25.184 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Dec 13:01:25.184 # Server started, Redis version 3.2.12\n1:M 16 Dec 13:01:25.185 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Dec 13:01:25.185 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Dec 16 13:01:27.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7917' Dec 16 13:01:27.352: INFO: stderr: "" Dec 16 13:01:27.352: INFO: stdout: "service/rm2 exposed\n" Dec 16 13:01:27.402: INFO: Service rm2 in namespace kubectl-7917 found. STEP: exposing service Dec 16 13:01:29.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7917' Dec 16 13:01:29.607: INFO: stderr: "" Dec 16 13:01:29.607: INFO: stdout: "service/rm3 exposed\n" Dec 16 13:01:29.657: INFO: Service rm3 in namespace kubectl-7917 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:01:31.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7917" for this suite. Dec 16 13:01:53.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:01:53.880: INFO: namespace kubectl-7917 deletion completed in 22.204292056s • [SLOW TEST:39.074 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:01:53.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Dec 16 13:01:53.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-359' Dec 16 13:01:55.049: INFO: 
stderr: "" Dec 16 13:01:55.049: INFO: stdout: "pod/pause created\n" Dec 16 13:01:55.050: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 16 13:01:55.050: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-359" to be "running and ready" Dec 16 13:01:55.181: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 130.993826ms Dec 16 13:01:57.192: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142203541s Dec 16 13:01:59.215: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164606008s Dec 16 13:02:01.224: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17413231s Dec 16 13:02:03.233: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.183048217s Dec 16 13:02:03.233: INFO: Pod "pause" satisfied condition "running and ready" Dec 16 13:02:03.233: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Dec 16 13:02:03.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-359' Dec 16 13:02:03.459: INFO: stderr: "" Dec 16 13:02:03.459: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 16 13:02:03.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-359' Dec 16 13:02:03.566: INFO: stderr: "" Dec 16 13:02:03.566: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 16 13:02:03.567: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-359' Dec 16 13:02:03.761: INFO: stderr: "" Dec 16 13:02:03.761: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 16 13:02:03.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-359' Dec 16 13:02:03.966: INFO: stderr: "" Dec 16 13:02:03.966: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Dec 16 13:02:03.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-359' Dec 16 13:02:04.113: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 16 13:02:04.113: INFO: stdout: "pod \"pause\" force deleted\n" Dec 16 13:02:04.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-359' Dec 16 13:02:04.258: INFO: stderr: "No resources found.\n" Dec 16 13:02:04.258: INFO: stdout: "" Dec 16 13:02:04.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-359 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 16 13:02:04.371: INFO: stderr: "" Dec 16 13:02:04.372: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:02:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-359" for this suite. 
Dec 16 13:02:10.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:02:10.571: INFO: namespace kubectl-359 deletion completed in 6.189963898s • [SLOW TEST:16.690 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:02:10.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 16 13:02:10.796: INFO: Waiting up to 5m0s for pod "pod-399451bd-6710-47aa-823a-1eeab969ed72" in namespace "emptydir-614" to be "success or failure" Dec 16 13:02:10.811: INFO: Pod "pod-399451bd-6710-47aa-823a-1eeab969ed72": Phase="Pending", Reason="", readiness=false. Elapsed: 14.419267ms Dec 16 13:02:12.847: INFO: Pod "pod-399451bd-6710-47aa-823a-1eeab969ed72": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050767994s Dec 16 13:02:14.868: INFO: Pod "pod-399451bd-6710-47aa-823a-1eeab969ed72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071378941s Dec 16 13:02:16.884: INFO: Pod "pod-399451bd-6710-47aa-823a-1eeab969ed72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087851134s Dec 16 13:02:18.895: INFO: Pod "pod-399451bd-6710-47aa-823a-1eeab969ed72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098514761s STEP: Saw pod success Dec 16 13:02:18.895: INFO: Pod "pod-399451bd-6710-47aa-823a-1eeab969ed72" satisfied condition "success or failure" Dec 16 13:02:18.899: INFO: Trying to get logs from node iruya-node pod pod-399451bd-6710-47aa-823a-1eeab969ed72 container test-container: STEP: delete the pod Dec 16 13:02:19.044: INFO: Waiting for pod pod-399451bd-6710-47aa-823a-1eeab969ed72 to disappear Dec 16 13:02:19.093: INFO: Pod pod-399451bd-6710-47aa-823a-1eeab969ed72 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:02:19.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-614" for this suite. 
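The emptyDir test above writes a 0644 file into a default-medium emptyDir as a non-root user and checks the resulting mode. A rough equivalent manifest (the e2e suite uses its own mounttest image and fixed UIDs; `busybox` and UID 1001 are assumptions for illustration):

```yaml
# emptydir-0644-demo.yaml -- sketch of the (non-root,0644,default) case; image/UID assumed
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  securityContext:
    runAsUser: 1001                # non-root; the suite's exact UID differs
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}                   # "default" medium, i.e. node-local disk
```

The pod runs to `Succeeded`, and the framework then reads the container logs to verify the reported file mode, which matches the "Trying to get logs ... container test-container" step in the log.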
Dec 16 13:02:25.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:02:25.242: INFO: namespace emptydir-614 deletion completed in 6.13576569s • [SLOW TEST:14.670 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:02:25.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 16 13:02:34.649: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:02:34.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1507" for this suite. Dec 16 13:02:40.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:02:40.884: INFO: namespace container-runtime-1507 deletion completed in 6.164091678s • [SLOW TEST:15.641 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:02:40.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7f0e545b-24a6-4ea8-9615-b22cae3f8c73 STEP: Creating a pod to test consume secrets Dec 16 13:02:41.428: INFO: Waiting up to 5m0s for pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656" in namespace "secrets-7268" to be "success or failure" Dec 16 13:02:41.434: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656": Phase="Pending", Reason="", readiness=false. Elapsed: 5.480303ms Dec 16 13:02:43.444: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016079511s Dec 16 13:02:45.458: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029578983s Dec 16 13:02:47.464: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035109374s Dec 16 13:02:49.473: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044261349s Dec 16 13:02:51.487: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.058495104s STEP: Saw pod success Dec 16 13:02:51.487: INFO: Pod "pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656" satisfied condition "success or failure" Dec 16 13:02:51.493: INFO: Trying to get logs from node iruya-node pod pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656 container secret-volume-test: STEP: delete the pod Dec 16 13:02:51.564: INFO: Waiting for pod pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656 to disappear Dec 16 13:02:51.569: INFO: Pod pod-secrets-1aa09dec-63f7-40b7-9d09-751dac19c656 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:02:51.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7268" for this suite. Dec 16 13:02:57.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:02:57.872: INFO: namespace secrets-7268 deletion completed in 6.296263594s STEP: Destroying namespace "secret-namespace-6647" for this suite. 
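This Secrets test verifies that a `secretName` reference resolves only within the pod's own namespace, which is why the log tears down two namespaces ("secrets-7268" and "secret-namespace-6647"): a second, same-named secret lives in the other namespace and must not be the one mounted. A hedged sketch of the arrangement (names, namespace, and image are assumptions, not the suite's generated values):

```yaml
# secret-ns-demo.yaml -- same-named secrets in different namespaces; names/image assumed
apiVersion: v1
kind: Secret
metadata:
  name: secret-test               # an identically named secret may exist elsewhere
  namespace: secrets-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
  namespace: secrets-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # resolved in the pod's namespace only
```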
Dec 16 13:03:05.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:03:06.094: INFO: namespace secret-namespace-6647 deletion completed in 8.22211782s • [SLOW TEST:25.209 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:03:06.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-3cb2bd46-fc6a-4abf-a6d9-bc9945302a5d STEP: Creating a pod to test consume configMaps Dec 16 13:03:06.206: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8" in namespace "projected-1257" to be "success or failure" Dec 16 13:03:06.213: INFO: Pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.663046ms Dec 16 13:03:08.246: INFO: Pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039270312s Dec 16 13:03:10.262: INFO: Pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055393777s Dec 16 13:03:12.269: INFO: Pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062143434s Dec 16 13:03:14.294: INFO: Pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087448036s STEP: Saw pod success Dec 16 13:03:14.295: INFO: Pod "pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8" satisfied condition "success or failure" Dec 16 13:03:14.301: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8 container projected-configmap-volume-test: STEP: delete the pod Dec 16 13:03:14.405: INFO: Waiting for pod pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8 to disappear Dec 16 13:03:14.411: INFO: Pod pod-projected-configmaps-6e383a3f-b6a5-43f3-8c75-32be253321e8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:03:14.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1257" for this suite. 
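The "with mappings" variant above consumes a ConfigMap through a `projected` volume while renaming a key to a nested path via `items`. A minimal sketch under assumed names (the suite generates its own object names and uses its own test image):

```yaml
# projected-cm-demo.yaml -- configMap key remapped inside a projected volume; names/image assumed
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-cm-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1   # the "mapping": key exposed under a custom path
```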
Dec 16 13:03:20.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:03:20.580: INFO: namespace projected-1257 deletion completed in 6.157070135s • [SLOW TEST:14.485 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:03:20.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-21554434-ff77-4ac6-91dc-0611b17e1fe7 in namespace container-probe-4967 Dec 16 13:03:28.799: INFO: Started pod liveness-21554434-ff77-4ac6-91dc-0611b17e1fe7 in namespace container-probe-4967 STEP: checking the pod's current state and verifying that restartCount is present Dec 16 13:03:28.806: INFO: Initial restart count of pod liveness-21554434-ff77-4ac6-91dc-0611b17e1fe7 is 0 Dec 16 13:03:50.926: INFO: Restart count of pod 
container-probe-4967/liveness-21554434-ff77-4ac6-91dc-0611b17e1fe7 is now 1 (22.11988178s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:03:50.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4967" for this suite. Dec 16 13:03:57.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:03:57.139: INFO: namespace container-probe-4967 deletion completed in 6.136193002s • [SLOW TEST:36.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:03:57.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-871 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
services-871 to expose endpoints map[] Dec 16 13:03:57.388: INFO: Get endpoints failed (16.099703ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Dec 16 13:03:58.398: INFO: successfully validated that service multi-endpoint-test in namespace services-871 exposes endpoints map[] (1.025914852s elapsed) STEP: Creating pod pod1 in namespace services-871 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-871 to expose endpoints map[pod1:[100]] Dec 16 13:04:02.639: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.217383254s elapsed, will retry) Dec 16 13:04:07.736: INFO: successfully validated that service multi-endpoint-test in namespace services-871 exposes endpoints map[pod1:[100]] (9.314297444s elapsed) STEP: Creating pod pod2 in namespace services-871 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-871 to expose endpoints map[pod1:[100] pod2:[101]] Dec 16 13:04:12.487: INFO: Unexpected endpoints: found map[bae5a1d3-ad95-4d41-b720-27fe3179d7de:[100]], expected map[pod1:[100] pod2:[101]] (4.731967329s elapsed, will retry) Dec 16 13:04:15.553: INFO: successfully validated that service multi-endpoint-test in namespace services-871 exposes endpoints map[pod1:[100] pod2:[101]] (7.798134918s elapsed) STEP: Deleting pod pod1 in namespace services-871 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-871 to expose endpoints map[pod2:[101]] Dec 16 13:04:16.634: INFO: successfully validated that service multi-endpoint-test in namespace services-871 exposes endpoints map[pod2:[101]] (1.067053834s elapsed) STEP: Deleting pod pod2 in namespace services-871 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-871 to expose endpoints map[] Dec 16 13:04:18.149: INFO: successfully validated that service multi-endpoint-test in namespace services-871 exposes endpoints map[] (1.501045735s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:04:19.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-871" for this suite. Dec 16 13:04:41.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:04:41.533: INFO: namespace services-871 deletion completed in 22.223232412s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:44.394 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:04:41.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Dec 16 13:04:50.261: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-5de65c39-38fd-4840-b55c-468e427fad13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 16 13:04:50.912: INFO: Running 
'/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-5de65c39-38fd-4840-b55c-468e427fad13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 16 13:04:51.467: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-283 pod-service-account-5de65c39-38fd-4840-b55c-468e427fad13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:04:51.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-283" for this suite. Dec 16 13:04:58.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:04:58.187: INFO: namespace svcaccounts-283 deletion completed in 6.201896806s • [SLOW TEST:16.653 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:04:58.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call 
prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-1165 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1165 STEP: Deleting pre-stop pod Dec 16 13:05:25.590: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:05:25.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1165" for this suite. 
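The JSON blob the test prints is the server pod's record that it was contacted exactly once via the tester pod's preStop hook (`"prestop": 1`) before the tester terminated. In the actual e2e test the hook is an HTTP call from tester to server; as a loose, assumption-labeled sketch of the same lifecycle mechanism using an exec hook:

```yaml
# prestop-demo.yaml -- illustrative preStop hook; hostname, port, and image are assumptions
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs on pod deletion, before SIGTERM reaches the container
          command: ["sh", "-c", "wget -qO- http://server:8080/prestop || true"]
```

Deleting the pod triggers the hook, which is what the framework observes on the server side before it tears the namespace down.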
Dec 16 13:06:07.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:06:07.821: INFO: namespace prestop-1165 deletion completed in 42.179822891s • [SLOW TEST:69.634 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:06:07.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 16 13:06:08.005: INFO: Waiting up to 5m0s for pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5" in namespace "emptydir-4746" to be "success or failure" Dec 16 13:06:08.017: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.637941ms Dec 16 13:06:10.023: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017738775s Dec 16 13:06:12.032: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.02678744s Dec 16 13:06:14.046: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040877992s Dec 16 13:06:16.053: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048328979s Dec 16 13:06:18.063: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057962857s Dec 16 13:06:20.072: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.06680933s STEP: Saw pod success Dec 16 13:06:20.072: INFO: Pod "pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5" satisfied condition "success or failure" Dec 16 13:06:20.075: INFO: Trying to get logs from node iruya-node pod pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5 container test-container: STEP: delete the pod Dec 16 13:06:20.202: INFO: Waiting for pod pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5 to disappear Dec 16 13:06:20.212: INFO: Pod pod-ee1f1e41-1c1e-4f99-b7bf-879e00fd17f5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:06:20.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4746" for this suite. 
Dec 16 13:06:26.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:06:26.353: INFO: namespace emptydir-4746 deletion completed in 6.13601336s • [SLOW TEST:18.529 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:06:26.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Dec 16 13:06:38.248: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:06:39.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6285" for this suite. 
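The ReplicaSet test above relies on controller adoption and release: a bare pod whose labels match a ReplicaSet's selector gets an ownerReference added (adopted); changing the pod's label out from under the selector removes it (released). A sketch of the two objects involved (image tag is an assumption):

```yaml
# rs-adoption-demo.yaml -- bare pod matching a ReplicaSet selector; image assumed
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release      # matches the selector below -> adopted
spec:
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: c
        image: k8s.gcr.io/pause:3.1
```

Patching the pod's `name` label to any other value releases it, and the ReplicaSet immediately creates a replacement to satisfy `replicas: 1`.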
Dec 16 13:07:03.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:07:03.526: INFO: namespace replicaset-6285 deletion completed in 24.174566562s • [SLOW TEST:37.173 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:07:03.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 16 13:07:11.986: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:07:12.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3193" for this suite. Dec 16 13:07:18.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:07:18.215: INFO: namespace container-runtime-3193 deletion completed in 6.184280363s • [SLOW TEST:14.687 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:07:18.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 16 13:07:18.297: INFO: Waiting up to 5m0s for pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa" in namespace "containers-7822" to be "success or failure"
Dec 16 13:07:18.359: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa": Phase="Pending", Reason="", readiness=false. Elapsed: 62.18522ms
Dec 16 13:07:20.373: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075480992s
Dec 16 13:07:22.385: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087756956s
Dec 16 13:07:24.394: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096865642s
Dec 16 13:07:26.401: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10351205s
Dec 16 13:07:28.406: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108972932s
STEP: Saw pod success
Dec 16 13:07:28.406: INFO: Pod "client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa" satisfied condition "success or failure"
Dec 16 13:07:28.409: INFO: Trying to get logs from node iruya-node pod client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa container test-container:
STEP: delete the pod
Dec 16 13:07:28.560: INFO: Waiting for pod client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa to disappear
Dec 16 13:07:28.597: INFO: Pod client-containers-b135c125-09ae-479e-ba0f-7341d1e823aa no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:07:28.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7822" for this suite.
Dec 16 13:07:34.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:07:34.868: INFO: namespace containers-7822 deletion completed in 6.257814946s

• [SLOW TEST:16.652 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:07:34.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-gbqb
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 13:07:35.017: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gbqb" in namespace "subpath-2271" to be "success or failure"
Dec 16 13:07:35.021: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734342ms
Dec 16 13:07:37.031: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013928471s
Dec 16 13:07:39.051: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033683922s
Dec 16 13:07:41.059: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042272464s
Dec 16 13:07:43.066: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 8.049022934s
Dec 16 13:07:45.079: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 10.061717172s
Dec 16 13:07:47.087: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 12.070195986s
Dec 16 13:07:49.095: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 14.078228126s
Dec 16 13:07:51.103: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 16.085871775s
Dec 16 13:07:53.111: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 18.093968515s
Dec 16 13:07:55.120: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 20.102542302s
Dec 16 13:07:57.133: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 22.11580875s
Dec 16 13:07:59.144: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 24.127409928s
Dec 16 13:08:01.162: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 26.14522006s
Dec 16 13:08:03.173: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Running", Reason="", readiness=true. Elapsed: 28.155843273s
Dec 16 13:08:05.211: INFO: Pod "pod-subpath-test-projected-gbqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.194060101s
STEP: Saw pod success
Dec 16 13:08:05.211: INFO: Pod "pod-subpath-test-projected-gbqb" satisfied condition "success or failure"
Dec 16 13:08:05.216: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-gbqb container test-container-subpath-projected-gbqb:
STEP: delete the pod
Dec 16 13:08:05.338: INFO: Waiting for pod pod-subpath-test-projected-gbqb to disappear
Dec 16 13:08:05.343: INFO: Pod pod-subpath-test-projected-gbqb no longer exists
STEP: Deleting pod pod-subpath-test-projected-gbqb
Dec 16 13:08:05.343: INFO: Deleting pod "pod-subpath-test-projected-gbqb" in namespace "subpath-2271"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:08:05.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2271" for this suite.
Dec 16 13:08:11.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:08:11.525: INFO: namespace subpath-2271 deletion completed in 6.17483764s

• [SLOW TEST:36.655 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:08:11.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a02b058f-9529-4684-9edd-0a74d54cbc54
STEP: Creating a pod to test consume configMaps
Dec 16 13:08:11.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071" in namespace "configmap-4440" to be "success or failure"
Dec 16 13:08:11.672: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071": Phase="Pending", Reason="", readiness=false. Elapsed: 7.949993ms
Dec 16 13:08:13.686: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022240517s
Dec 16 13:08:15.839: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175024382s
Dec 16 13:08:17.849: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18553133s
Dec 16 13:08:19.864: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19983434s
Dec 16 13:08:21.882: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217717919s
STEP: Saw pod success
Dec 16 13:08:21.882: INFO: Pod "pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071" satisfied condition "success or failure"
Dec 16 13:08:21.896: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071 container configmap-volume-test:
STEP: delete the pod
Dec 16 13:08:22.226: INFO: Waiting for pod pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071 to disappear
Dec 16 13:08:22.232: INFO: Pod pod-configmaps-f211fc28-c2d3-4936-b447-291ef3907071 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:08:22.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4440" for this suite.
Dec 16 13:08:28.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:08:28.379: INFO: namespace configmap-4440 deletion completed in 6.142761466s

• [SLOW TEST:16.853 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:08:28.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-59e62704-f46f-48a1-a734-5435bfe46fe2
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:08:28.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9577" for this suite.
Dec 16 13:08:34.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:08:34.732: INFO: namespace secrets-9577 deletion completed in 6.189185191s

• [SLOW TEST:6.352 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:08:34.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 13:08:34.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc" in namespace "downward-api-596" to be "success or failure"
Dec 16 13:08:34.924: INFO: Pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.014707ms
Dec 16 13:08:36.934: INFO: Pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045483326s
Dec 16 13:08:38.971: INFO: Pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08323147s
Dec 16 13:08:40.982: INFO: Pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094050339s
Dec 16 13:08:43.020: INFO: Pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.13171297s
STEP: Saw pod success
Dec 16 13:08:43.020: INFO: Pod "downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc" satisfied condition "success or failure"
Dec 16 13:08:43.028: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc container client-container:
STEP: delete the pod
Dec 16 13:08:43.199: INFO: Waiting for pod downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc to disappear
Dec 16 13:08:43.214: INFO: Pod downwardapi-volume-61b412d6-134f-48cc-8f75-9ca33c2339dc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:08:43.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-596" for this suite.
Dec 16 13:08:49.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:08:49.358: INFO: namespace downward-api-596 deletion completed in 6.138799922s

• [SLOW TEST:14.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:08:49.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-tk2j2 in namespace proxy-5341
I1216 13:08:49.540277 8 runners.go:180] Created replication controller with name: proxy-service-tk2j2, namespace: proxy-5341, replica count: 1
I1216 13:08:50.591220 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:51.592243 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:52.593745 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:53.594791 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:54.595603 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:55.597079 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:56.598158 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 13:08:57.599021 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1216 13:08:58.600219 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1216 13:08:59.601224 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1216 13:09:00.601598 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1216 13:09:01.601987 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1216 13:09:02.602633 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1216 13:09:03.603815 8 runners.go:180] proxy-service-tk2j2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 16 13:09:03.620: INFO: setup took 14.198949091s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 16 13:09:03.688: INFO: (0) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 66.214267ms)
Dec 16 13:09:03.688: INFO: (0) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 65.507752ms)
Dec 16 13:09:03.688: INFO: (0) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 67.008436ms)
Dec 16 13:09:03.688: INFO: (0) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 66.458988ms)
Dec 16 13:09:03.690: INFO: (0) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 67.436353ms)
Dec 16 13:09:03.690: INFO: (0) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 67.986679ms)
Dec 16 13:09:03.691: INFO: (0) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 68.494453ms)
Dec 16 13:09:03.691: INFO: (0) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 69.643519ms)
Dec 16 13:09:03.692: INFO: (0) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 70.176242ms)
Dec 16 13:09:03.692: INFO: (0) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 69.98798ms)
Dec 16 13:09:03.692: INFO: (0) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 69.932161ms)
Dec 16 13:09:03.703: INFO: (0) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 80.391022ms)
Dec 16 13:09:03.703: INFO: (0) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test<... (200; 34.878027ms)
Dec 16 13:09:03.740: INFO: (1) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 35.095306ms)
Dec 16 13:09:03.740: INFO: (1) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 34.969982ms)
Dec 16 13:09:03.741: INFO: (1) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 35.34052ms)
Dec 16 13:09:03.741: INFO: (1) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 35.774919ms)
Dec 16 13:09:03.743: INFO: (1) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 37.758071ms)
Dec 16 13:09:03.745: INFO: (1) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 39.308007ms)
Dec 16 13:09:03.745: INFO: (1) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 40.128261ms)
Dec 16 13:09:03.745: INFO: (1) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 40.323162ms)
Dec 16 13:09:03.746: INFO: (1) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 40.109856ms)
Dec 16 13:09:03.746: INFO: (1) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 40.889887ms)
Dec 16 13:09:03.749: INFO: (1) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 43.772605ms)
Dec 16 13:09:03.760: INFO: (2) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 10.700486ms)
Dec 16 13:09:03.760: INFO: (2) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 10.478337ms)
Dec 16 13:09:03.760: INFO: (2) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 11.184265ms)
Dec 16 13:09:03.760: INFO: (2) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 11.152836ms)
Dec 16 13:09:03.761: INFO: (2) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 11.493639ms)
Dec 16 13:09:03.762: INFO: (2) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 13.450924ms)
Dec 16 13:09:03.762: INFO: (2) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 13.141055ms)
Dec 16 13:09:03.763: INFO: (2) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 13.92384ms)
Dec 16 13:09:03.763: INFO: (2) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 14.014916ms)
Dec 16 13:09:03.763: INFO: (2) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 14.464149ms)
Dec 16 13:09:03.764: INFO: (2) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 14.43544ms)
Dec 16 13:09:03.764: INFO: (2) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test<... (200; 18.772683ms)
Dec 16 13:09:03.786: INFO: (3) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 18.880058ms)
Dec 16 13:09:03.786: INFO: (3) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 18.780064ms)
Dec 16 13:09:03.789: INFO: (3) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 21.204438ms)
Dec 16 13:09:03.789: INFO: (3) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 21.428554ms)
Dec 16 13:09:03.789: INFO: (3) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 21.593662ms)
Dec 16 13:09:03.789: INFO: (3) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 10.731949ms)
Dec 16 13:09:03.809: INFO: (4) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 11.489623ms)
Dec 16 13:09:03.810: INFO: (4) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 12.089286ms)
Dec 16 13:09:03.810: INFO: (4) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 12.426953ms)
Dec 16 13:09:03.811: INFO: (4) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 12.880691ms)
Dec 16 13:09:03.811: INFO: (4) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 12.949496ms)
Dec 16 13:09:03.811: INFO: (4) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 13.390503ms)
Dec 16 13:09:03.811: INFO: (4) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 13.634267ms)
Dec 16 13:09:03.811: INFO: (4) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 13.532123ms)
Dec 16 13:09:03.812: INFO: (4) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 13.902672ms)
Dec 16 13:09:03.812: INFO: (4) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 13.859856ms)
Dec 16 13:09:03.812: INFO: (4) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 21.891621ms)
Dec 16 13:09:03.910: INFO: (5) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 22.329222ms)
Dec 16 13:09:03.911: INFO: (5) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 22.700253ms)
Dec 16 13:09:03.911: INFO: (5) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 22.724933ms)
Dec 16 13:09:03.911: INFO: (5) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 22.360959ms)
Dec 16 13:09:03.910: INFO: (5) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 22.269219ms)
Dec 16 13:09:03.911: INFO: (5) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 22.802221ms)
Dec 16 13:09:03.914: INFO: (5) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 26.150863ms)
Dec 16 13:09:03.915: INFO: (5) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 26.23754ms)
Dec 16 13:09:03.915: INFO: (5) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 26.617752ms)
Dec 16 13:09:03.915: INFO: (5) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test<... (200; 25.857456ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 29.221973ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 28.744762ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 28.883119ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 28.834981ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 28.968821ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 28.998729ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 28.977144ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 29.0195ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 29.269371ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 29.177149ms)
Dec 16 13:09:03.949: INFO: (6) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 29.405753ms)
Dec 16 13:09:03.950: INFO: (6) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 29.472953ms)
Dec 16 13:09:03.950: INFO: (6) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 29.544644ms)
Dec 16 13:09:03.951: INFO: (6) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 31.32109ms)
Dec 16 13:09:03.961: INFO: (7) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 9.33739ms)
Dec 16 13:09:03.961: INFO: (7) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 9.214808ms)
Dec 16 13:09:03.961: INFO: (7) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 17.019623ms)
Dec 16 13:09:03.969: INFO: (7) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 17.409081ms)
Dec 16 13:09:03.969: INFO: (7) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 17.749114ms)
Dec 16 13:09:03.970: INFO: (7) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 17.981006ms)
Dec 16 13:09:03.970: INFO: (7) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 18.279614ms)
Dec 16 13:09:03.970: INFO: (7) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 18.456265ms)
Dec 16 13:09:03.970: INFO: (7) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 18.410111ms)
Dec 16 13:09:03.973: INFO: (7) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 20.923194ms)
Dec 16 13:09:03.973: INFO: (7) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 21.533858ms)
Dec 16 13:09:03.973: INFO: (7) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 21.33025ms)
Dec 16 13:09:03.974: INFO: (7) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 22.157443ms)
Dec 16 13:09:04.001: INFO: (8) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 26.867108ms)
Dec 16 13:09:04.001: INFO: (8) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 27.008935ms)
Dec 16 13:09:04.001: INFO: (8) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 26.487486ms)
Dec 16 13:09:04.001: INFO: (8) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 26.451748ms)
Dec 16 13:09:04.002: INFO: (8) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 27.43824ms)
Dec 16 13:09:04.002: INFO: (8) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 27.054251ms)
Dec 16 13:09:04.002: INFO: (8) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 27.566266ms)
Dec 16 13:09:04.002: INFO: (8) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 28.026049ms)
Dec 16 13:09:04.004: INFO: (8) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 29.037803ms)
Dec 16 13:09:04.004: INFO: (8) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 14.470499ms)
Dec 16 13:09:04.022: INFO: (9) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 15.3349ms)
Dec 16 13:09:04.022: INFO: (9) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 14.865862ms)
Dec 16 13:09:04.024: INFO: (9) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 16.591774ms)
Dec 16 13:09:04.024: INFO: (9) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 16.980963ms)
Dec 16 13:09:04.028: INFO: (9) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 21.322752ms)
Dec 16 13:09:04.028: INFO: (9) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 21.918275ms)
Dec 16 13:09:04.029: INFO: (9) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 21.416201ms)
Dec 16 13:09:04.029: INFO: (9) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test<... (200; 15.260243ms)
Dec 16 13:09:04.057: INFO: (10) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 15.813036ms)
Dec 16 13:09:04.058: INFO: (10) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 16.826979ms)
Dec 16 13:09:04.058: INFO: (10) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 17.178733ms)
Dec 16 13:09:04.059: INFO: (10) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 17.646489ms)
Dec 16 13:09:04.059: INFO: (10) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 17.374115ms)
Dec 16 13:09:04.062: INFO: (10) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 20.915769ms)
Dec 16 13:09:04.063: INFO: (10) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 21.947863ms)
Dec 16 13:09:04.068: INFO: (11) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 4.533578ms)
Dec 16 13:09:04.072: INFO: (11) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 7.868768ms)
Dec 16 13:09:04.072: INFO: (11) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ...
(200; 8.156863ms) Dec 16 13:09:04.121: INFO: (11) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 56.7498ms) Dec 16 13:09:04.121: INFO: (11) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 56.702904ms) Dec 16 13:09:04.121: INFO: (11) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 56.604667ms) Dec 16 13:09:04.123: INFO: (11) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 59.195896ms) Dec 16 13:09:04.123: INFO: (11) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 59.093118ms) Dec 16 13:09:04.124: INFO: (11) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 60.40634ms) Dec 16 13:09:04.124: INFO: (11) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test<... (200; 61.248803ms) Dec 16 13:09:04.126: INFO: (11) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 61.357249ms) Dec 16 13:09:04.127: INFO: (11) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 62.780791ms) Dec 16 13:09:04.127: INFO: (11) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 62.807856ms) Dec 16 13:09:04.137: INFO: (12) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 9.88915ms) Dec 16 13:09:04.141: INFO: (12) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 13.602726ms) Dec 16 13:09:04.142: INFO: (12) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 14.316979ms) Dec 16 13:09:04.142: INFO: (12) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... 
(200; 14.679504ms) Dec 16 13:09:04.142: INFO: (12) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 15.277555ms) Dec 16 13:09:04.144: INFO: (12) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 16.653605ms) Dec 16 13:09:04.144: INFO: (12) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 16.760993ms) Dec 16 13:09:04.144: INFO: (12) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 16.719319ms) Dec 16 13:09:04.145: INFO: (12) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 17.17294ms) Dec 16 13:09:04.145: INFO: (12) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 17.180202ms) Dec 16 13:09:04.145: INFO: (12) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 17.578707ms) Dec 16 13:09:04.145: INFO: (12) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 17.742706ms) Dec 16 13:09:04.145: INFO: (12) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 17.694023ms) Dec 16 13:09:04.145: INFO: (12) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 7.605238ms) Dec 16 13:09:04.155: INFO: (13) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... 
(200; 7.95876ms) Dec 16 13:09:04.155: INFO: (13) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 7.922616ms) Dec 16 13:09:04.155: INFO: (13) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 8.291938ms) Dec 16 13:09:04.155: INFO: (13) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 8.130201ms) Dec 16 13:09:04.155: INFO: (13) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: ... (200; 8.536378ms) Dec 16 13:09:04.156: INFO: (13) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 8.761905ms) Dec 16 13:09:04.159: INFO: (13) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 12.586022ms) Dec 16 13:09:04.160: INFO: (13) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 12.747255ms) Dec 16 13:09:04.160: INFO: (13) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 13.081753ms) Dec 16 13:09:04.160: INFO: (13) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 13.341505ms) Dec 16 13:09:04.160: INFO: (13) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 13.398115ms) Dec 16 13:09:04.161: INFO: (13) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 13.705095ms) Dec 16 13:09:04.164: INFO: (14) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... 
(200; 3.316411ms) Dec 16 13:09:04.164: INFO: (14) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 3.391908ms) Dec 16 13:09:04.165: INFO: (14) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 4.147836ms) Dec 16 13:09:04.165: INFO: (14) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 4.610054ms) Dec 16 13:09:04.166: INFO: (14) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 5.49177ms) Dec 16 13:09:04.168: INFO: (14) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 7.589015ms) Dec 16 13:09:04.168: INFO: (14) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 7.600409ms) Dec 16 13:09:04.168: INFO: (14) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 7.800443ms) Dec 16 13:09:04.168: INFO: (14) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 7.593671ms) Dec 16 13:09:04.168: INFO: (14) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 7.774813ms) Dec 16 13:09:04.168: INFO: (14) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 7.755271ms) Dec 16 13:09:04.169: INFO: (14) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 7.811732ms) Dec 16 13:09:04.169: INFO: (14) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 8.433715ms) Dec 16 13:09:04.170: INFO: (14) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: ... 
(200; 6.507553ms) Dec 16 13:09:04.182: INFO: (15) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 6.989437ms) Dec 16 13:09:04.186: INFO: (15) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 11.61377ms) Dec 16 13:09:04.187: INFO: (15) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 11.676984ms) Dec 16 13:09:04.187: INFO: (15) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 11.975767ms) Dec 16 13:09:04.187: INFO: (15) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 12.10284ms) Dec 16 13:09:04.187: INFO: (15) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 12.128572ms) Dec 16 13:09:04.187: INFO: (15) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 12.311034ms) Dec 16 13:09:04.187: INFO: (15) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: ... (200; 7.3052ms) Dec 16 13:09:04.198: INFO: (16) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test (200; 7.746514ms) Dec 16 13:09:04.199: INFO: (16) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 8.465893ms) Dec 16 13:09:04.199: INFO: (16) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... 
(200; 8.49327ms) Dec 16 13:09:04.199: INFO: (16) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 8.713297ms) Dec 16 13:09:04.199: INFO: (16) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 8.629136ms) Dec 16 13:09:04.202: INFO: (16) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 10.973448ms) Dec 16 13:09:04.202: INFO: (16) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 11.046606ms) Dec 16 13:09:04.202: INFO: (16) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 11.514408ms) Dec 16 13:09:04.202: INFO: (16) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 11.630346ms) Dec 16 13:09:04.202: INFO: (16) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 11.780606ms) Dec 16 13:09:04.203: INFO: (16) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 12.645769ms) Dec 16 13:09:04.203: INFO: (16) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 12.668872ms) Dec 16 13:09:04.210: INFO: (17) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 6.91531ms) Dec 16 13:09:04.211: INFO: (17) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 7.920588ms) Dec 16 13:09:04.212: INFO: (17) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname2/proxy/: tls qux (200; 8.81949ms) Dec 16 13:09:04.213: INFO: (17) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 9.240329ms) Dec 16 13:09:04.213: INFO: (17) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: test<... 
(200; 9.414443ms) Dec 16 13:09:04.213: INFO: (17) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 10.040579ms) Dec 16 13:09:04.213: INFO: (17) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 10.072927ms) Dec 16 13:09:04.214: INFO: (17) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 10.172354ms) Dec 16 13:09:04.214: INFO: (17) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 10.112062ms) Dec 16 13:09:04.214: INFO: (17) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 10.233129ms) Dec 16 13:09:04.214: INFO: (17) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 10.365044ms) Dec 16 13:09:04.214: INFO: (17) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 10.879164ms) Dec 16 13:09:04.214: INFO: (17) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 11.134738ms) Dec 16 13:09:04.215: INFO: (17) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 11.371942ms) Dec 16 13:09:04.220: INFO: (18) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 4.86943ms) Dec 16 13:09:04.222: INFO: (18) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 7.202884ms) Dec 16 13:09:04.223: INFO: (18) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:462/proxy/: tls qux (200; 7.584928ms) Dec 16 13:09:04.223: INFO: (18) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 7.782412ms) Dec 16 13:09:04.223: INFO: (18) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 8.355002ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/services/https:proxy-service-tk2j2:tlsportname1/proxy/: tls baz (200; 
8.474519ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/pods/http:proxy-service-tk2j2-24lkb:1080/proxy/: ... (200; 8.571806ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:460/proxy/: tls baz (200; 8.541445ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 8.647185ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 9.035692ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... (200; 9.280027ms) Dec 16 13:09:04.224: INFO: (18) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 9.385866ms) Dec 16 13:09:04.225: INFO: (18) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname2/proxy/: bar (200; 9.699726ms) Dec 16 13:09:04.225: INFO: (18) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname1/proxy/: foo (200; 9.983934ms) Dec 16 13:09:04.228: INFO: (18) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: ... (200; 10.135678ms) Dec 16 13:09:04.243: INFO: (19) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:160/proxy/: foo (200; 10.153104ms) Dec 16 13:09:04.243: INFO: (19) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:162/proxy/: bar (200; 11.043207ms) Dec 16 13:09:04.246: INFO: (19) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb/proxy/: test (200; 13.534477ms) Dec 16 13:09:04.246: INFO: (19) /api/v1/namespaces/proxy-5341/pods/proxy-service-tk2j2-24lkb:1080/proxy/: test<... 
(200; 13.549938ms) Dec 16 13:09:04.246: INFO: (19) /api/v1/namespaces/proxy-5341/services/http:proxy-service-tk2j2:portname2/proxy/: bar (200; 13.740654ms) Dec 16 13:09:04.246: INFO: (19) /api/v1/namespaces/proxy-5341/services/proxy-service-tk2j2:portname1/proxy/: foo (200; 13.520189ms) Dec 16 13:09:04.246: INFO: (19) /api/v1/namespaces/proxy-5341/pods/https:proxy-service-tk2j2-24lkb:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 16 13:09:23.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11" in namespace "downward-api-302" to be "success or failure" Dec 16 13:09:23.018: INFO: Pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.638692ms Dec 16 13:09:25.025: INFO: Pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012192979s Dec 16 13:09:27.033: INFO: Pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019425889s Dec 16 13:09:29.039: INFO: Pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025868731s Dec 16 13:09:31.060: INFO: Pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.046379239s STEP: Saw pod success Dec 16 13:09:31.060: INFO: Pod "downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11" satisfied condition "success or failure" Dec 16 13:09:31.070: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11 container client-container: STEP: delete the pod Dec 16 13:09:31.247: INFO: Waiting for pod downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11 to disappear Dec 16 13:09:31.254: INFO: Pod downwardapi-volume-df4f9568-f532-4d8e-af9c-8b56a8c4be11 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:09:31.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-302" for this suite. Dec 16 13:09:37.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:09:37.436: INFO: namespace downward-api-302 deletion completed in 6.176094912s • [SLOW TEST:14.613 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:09:37.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Dec 16 13:09:37.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3847' Dec 16 13:09:38.092: INFO: stderr: "" Dec 16 13:09:38.092: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Dec 16 13:09:39.102: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:39.102: INFO: Found 0 / 1 Dec 16 13:09:40.135: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:40.135: INFO: Found 0 / 1 Dec 16 13:09:41.110: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:41.110: INFO: Found 0 / 1 Dec 16 13:09:42.105: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:42.105: INFO: Found 0 / 1 Dec 16 13:09:43.100: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:43.101: INFO: Found 0 / 1 Dec 16 13:09:44.099: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:44.099: INFO: Found 0 / 1 Dec 16 13:09:45.110: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:45.110: INFO: Found 0 / 1 Dec 16 13:09:46.099: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:46.099: INFO: Found 1 / 1 Dec 16 13:09:46.099: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 16 13:09:46.115: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:09:46.115: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Dec 16 13:09:46.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf7kj redis-master --namespace=kubectl-3847' Dec 16 13:09:46.322: INFO: stderr: "" Dec 16 13:09:46.322: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Dec 13:09:44.961 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Dec 13:09:44.961 # Server started, Redis version 3.2.12\n1:M 16 Dec 13:09:44.962 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 16 Dec 13:09:44.963 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 16 13:09:46.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf7kj redis-master --namespace=kubectl-3847 --tail=1' Dec 16 13:09:46.652: INFO: stderr: "" Dec 16 13:09:46.653: INFO: stdout: "1:M 16 Dec 13:09:44.963 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 16 13:09:46.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf7kj redis-master --namespace=kubectl-3847 --limit-bytes=1' Dec 16 13:09:46.780: INFO: stderr: "" Dec 16 13:09:46.780: INFO: stdout: " " STEP: exposing timestamps Dec 16 13:09:46.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf7kj redis-master --namespace=kubectl-3847 --tail=1 --timestamps' Dec 16 13:09:46.919: INFO: stderr: "" Dec 16 13:09:46.920: INFO: stdout: "2019-12-16T13:09:44.96381252Z 1:M 16 Dec 13:09:44.963 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 16 13:09:49.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf7kj redis-master --namespace=kubectl-3847 --since=1s' Dec 16 13:09:49.642: INFO: stderr: "" Dec 16 13:09:49.643: INFO: stdout: "" Dec 16 13:09:49.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf7kj redis-master --namespace=kubectl-3847 --since=24h' Dec 16 13:09:49.824: INFO: stderr: "" Dec 16 13:09:49.825: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Dec 13:09:44.961 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Dec 13:09:44.961 # Server started, Redis version 3.2.12\n1:M 16 Dec 13:09:44.962 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Dec 13:09:44.963 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Dec 16 13:09:49.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3847' Dec 16 13:09:50.042: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 16 13:09:50.042: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 16 13:09:50.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3847' Dec 16 13:09:50.186: INFO: stderr: "No resources found.\n" Dec 16 13:09:50.187: INFO: stdout: "" Dec 16 13:09:50.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3847 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 16 13:09:50.287: INFO: stderr: "" Dec 16 13:09:50.287: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:09:50.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3847" for this suite. 
Dec 16 13:10:12.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:10:12.555: INFO: namespace kubectl-3847 deletion completed in 22.260686241s • [SLOW TEST:35.119 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:10:12.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 16 13:10:12.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36" in namespace "downward-api-6581" to be "success or failure" Dec 16 13:10:12.714: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.868587ms Dec 16 13:10:14.724: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020610447s Dec 16 13:10:16.741: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03689842s Dec 16 13:10:18.750: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046011787s Dec 16 13:10:20.983: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278944715s Dec 16 13:10:22.994: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.290429825s STEP: Saw pod success Dec 16 13:10:22.994: INFO: Pod "downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36" satisfied condition "success or failure" Dec 16 13:10:22.998: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36 container client-container: STEP: delete the pod Dec 16 13:10:23.090: INFO: Waiting for pod downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36 to disappear Dec 16 13:10:23.146: INFO: Pod downwardapi-volume-94239728-2aba-439f-a15b-1b9cc01f5f36 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:10:23.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6581" for this suite. 
Dec 16 13:10:29.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:10:29.333: INFO: namespace downward-api-6581 deletion completed in 6.178773381s

• [SLOW TEST:16.777 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:10:29.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 16 13:10:29.520: INFO: Waiting up to 5m0s for pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894" in namespace "emptydir-9562" to be "success or failure"
Dec 16 13:10:29.531: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894": Phase="Pending", Reason="", readiness=false. Elapsed: 11.449342ms
Dec 16 13:10:31.540: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020389915s
Dec 16 13:10:33.550: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030371327s
Dec 16 13:10:35.561: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041120004s
Dec 16 13:10:37.575: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054979943s
Dec 16 13:10:39.585: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065386451s
STEP: Saw pod success
Dec 16 13:10:39.585: INFO: Pod "pod-a600c946-360d-4c57-b9f7-f0a46c109894" satisfied condition "success or failure"
Dec 16 13:10:39.589: INFO: Trying to get logs from node iruya-node pod pod-a600c946-360d-4c57-b9f7-f0a46c109894 container test-container:
STEP: delete the pod
Dec 16 13:10:39.688: INFO: Waiting for pod pod-a600c946-360d-4c57-b9f7-f0a46c109894 to disappear
Dec 16 13:10:39.697: INFO: Pod pod-a600c946-360d-4c57-b9f7-f0a46c109894 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:10:39.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9562" for this suite.
Dec 16 13:10:45.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:10:45.902: INFO: namespace emptydir-9562 deletion completed in 6.196224251s

• [SLOW TEST:16.569 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:10:45.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:10:57.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4529" for this suite.
Dec 16 13:11:21.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:11:21.971: INFO: namespace replication-controller-4529 deletion completed in 24.182773208s

• [SLOW TEST:36.069 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:11:21.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 13:11:22.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-884'
Dec 16 13:11:24.973: INFO: stderr: ""
Dec 16 13:11:24.973: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 16 13:11:24.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-884'
Dec 16 13:11:25.703: INFO: stderr: ""
Dec 16 13:11:25.703: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 16 13:11:26.720: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:26.720: INFO: Found 0 / 1
Dec 16 13:11:27.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:27.714: INFO: Found 0 / 1
Dec 16 13:11:28.734: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:28.734: INFO: Found 0 / 1
Dec 16 13:11:29.716: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:29.717: INFO: Found 0 / 1
Dec 16 13:11:30.718: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:30.718: INFO: Found 0 / 1
Dec 16 13:11:31.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:31.714: INFO: Found 0 / 1
Dec 16 13:11:32.717: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:32.718: INFO: Found 0 / 1
Dec 16 13:11:33.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:33.714: INFO: Found 0 / 1
Dec 16 13:11:34.730: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:34.731: INFO: Found 1 / 1
Dec 16 13:11:34.731: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 16 13:11:34.748: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:34.748: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 16 13:11:34.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-6m4pv --namespace=kubectl-884'
Dec 16 13:11:35.187: INFO: stderr: ""
Dec 16 13:11:35.187: INFO: stdout: "Name: redis-master-6m4pv\nNamespace: kubectl-884\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 16 Dec 2019 13:11:25 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://4eb79684688ac0a7e3c8fd6c835dcad64e4fbf8d7f8523b277a2353e4e586fc4\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 16 Dec 2019 13:11:33 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-czq4v (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-czq4v:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-czq4v\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned kubectl-884/redis-master-6m4pv to iruya-node\n Normal Pulled 5s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n"
Dec 16 13:11:35.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-884'
Dec 16 13:11:35.305: INFO: stderr: ""
Dec 16 13:11:35.305: INFO: stdout: "Name: redis-master\nNamespace: kubectl-884\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 10s replication-controller Created pod: redis-master-6m4pv\n"
Dec 16 13:11:35.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-884'
Dec 16 13:11:35.425: INFO: stderr: ""
Dec 16 13:11:35.426: INFO: stdout: "Name: redis-master\nNamespace: kubectl-884\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.202.64\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n"
Dec 16 13:11:35.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 16 13:11:35.561: INFO: stderr: ""
Dec 16 13:11:35.561: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 16 Dec 2019 13:11:03 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 16 Dec 2019 13:11:03 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 16 Dec 2019 13:11:03 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 16 Dec 2019 13:11:03 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 134d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kubectl-884 redis-master-6m4pv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Dec 16 13:11:35.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-884'
Dec 16 13:11:35.692: INFO: stderr: ""
Dec 16 13:11:35.692: INFO: stdout: "Name: kubectl-884\nLabels: e2e-framework=kubectl\n e2e-run=af1c7816-4fc5-424d-b60a-91f1ced6b809\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:11:35.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-884" for this suite.
Dec 16 13:11:57.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:11:57.864: INFO: namespace kubectl-884 deletion completed in 22.165810514s

• [SLOW TEST:35.893 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:11:57.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings
[NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-b029869b-ad79-43ef-87e7-124f67b2e822
STEP: Creating a pod to test consume secrets
Dec 16 13:11:57.959: INFO: Waiting up to 5m0s for pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1" in namespace "secrets-1513" to be "success or failure"
Dec 16 13:11:57.981: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.324283ms
Dec 16 13:11:59.995: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036693126s
Dec 16 13:12:02.018: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059688931s
Dec 16 13:12:04.024: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065509743s
Dec 16 13:12:06.031: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071949706s
Dec 16 13:12:08.060: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101521705s
STEP: Saw pod success
Dec 16 13:12:08.061: INFO: Pod "pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1" satisfied condition "success or failure"
Dec 16 13:12:08.070: INFO: Trying to get logs from node iruya-node pod pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1 container secret-volume-test:
STEP: delete the pod
Dec 16 13:12:08.265: INFO: Waiting for pod pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1 to disappear
Dec 16 13:12:08.274: INFO: Pod pod-secrets-9f8d9e79-1d33-46dd-81c9-cb95a6c239f1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:12:08.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1513" for this suite.
Dec 16 13:12:14.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:12:14.455: INFO: namespace secrets-1513 deletion completed in 6.170773489s

• [SLOW TEST:16.590 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:12:14.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-e66ee8bd-c578-4a27-86f3-2f89d7ce6cd5
STEP: Creating a pod to test consume secrets
Dec 16 13:12:14.698: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd" in namespace "projected-2881" to be "success or failure"
Dec 16 13:12:14.706: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.672413ms
Dec 16 13:12:16.721: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021933573s
Dec 16 13:12:18.739: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039957605s
Dec 16 13:12:20.754: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055099456s
Dec 16 13:12:22.768: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068848025s
Dec 16 13:12:24.789: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090730317s
STEP: Saw pod success
Dec 16 13:12:24.790: INFO: Pod "pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd" satisfied condition "success or failure"
Dec 16 13:12:24.805: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd container projected-secret-volume-test:
STEP: delete the pod
Dec 16 13:12:24.935: INFO: Waiting for pod pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd to disappear
Dec 16 13:12:24.963: INFO: Pod pod-projected-secrets-72be41d5-d5fc-41dc-bab1-dd611f0dd5bd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:12:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2881" for this suite.
Dec 16 13:12:31.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:12:31.150: INFO: namespace projected-2881 deletion completed in 6.178591726s

• [SLOW TEST:16.694 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:12:31.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 13:12:31.245: INFO: Waiting up to 5m0s for pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936" in namespace "downward-api-4552" to be "success or failure"
Dec 16 13:12:31.250: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591644ms
Dec 16 13:12:33.261: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015595453s
Dec 16 13:12:35.273: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02721834s
Dec 16 13:12:37.292: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046225482s
Dec 16 13:12:39.310: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064393856s
Dec 16 13:12:41.321: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076074567s
STEP: Saw pod success
Dec 16 13:12:41.322: INFO: Pod "downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936" satisfied condition "success or failure"
Dec 16 13:12:41.330: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936 container client-container:
STEP: delete the pod
Dec 16 13:12:41.505: INFO: Waiting for pod downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936 to disappear
Dec 16 13:12:41.515: INFO: Pod downwardapi-volume-290b3af6-1725-4375-a0c8-f8bad0a5c936 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:12:41.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4552" for this suite.
Dec 16 13:12:47.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:12:47.705: INFO: namespace downward-api-4552 deletion completed in 6.178931105s

• [SLOW TEST:16.552 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:12:47.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7jsh
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 13:12:47.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7jsh" in namespace "subpath-4383" to be "success or failure"
Dec 16 13:12:47.961: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111402ms
Dec 16 13:12:49.967: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012296536s
Dec 16 13:12:51.977: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022289302s
Dec 16 13:12:53.991: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03565269s
Dec 16 13:12:56.018: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063120866s
Dec 16 13:12:58.024: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 10.069314645s
Dec 16 13:13:00.036: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 12.081278602s
Dec 16 13:13:02.049: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 14.093595928s
Dec 16 13:13:04.059: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 16.104155376s
Dec 16 13:13:06.069: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 18.11419303s
Dec 16 13:13:08.076: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 20.120798696s
Dec 16 13:13:10.085: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 22.12952855s
Dec 16 13:13:12.093: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 24.138142908s
Dec 16 13:13:14.102: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 26.146431839s
Dec 16 13:13:16.109: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 28.153566183s
Dec 16 13:13:18.115: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 30.16024788s
Dec 16 13:13:20.128: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Running", Reason="", readiness=true. Elapsed: 32.172789485s
Dec 16 13:13:22.138: INFO: Pod "pod-subpath-test-configmap-7jsh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.182917462s
STEP: Saw pod success
Dec 16 13:13:22.138: INFO: Pod "pod-subpath-test-configmap-7jsh" satisfied condition "success or failure"
Dec 16 13:13:22.143: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-7jsh container test-container-subpath-configmap-7jsh:
STEP: delete the pod
Dec 16 13:13:22.520: INFO: Waiting for pod pod-subpath-test-configmap-7jsh to disappear
Dec 16 13:13:22.526: INFO: Pod pod-subpath-test-configmap-7jsh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7jsh
Dec 16 13:13:22.526: INFO: Deleting pod "pod-subpath-test-configmap-7jsh" in namespace "subpath-4383"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:13:22.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4383" for this suite.
Dec 16 13:13:28.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:13:28.696: INFO: namespace subpath-4383 deletion completed in 6.161517315s

• [SLOW TEST:40.989 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:13:28.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-30830153-9d08-4b3e-bd34-1bfaef4da6cf
STEP: Creating a pod to test consume secrets
Dec 16 13:13:28.985: INFO: Waiting up to 5m0s for pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783" in namespace "secrets-7210" to be "success or failure"
Dec 16 13:13:28.993: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783": Phase="Pending", Reason="", readiness=false. Elapsed: 7.556511ms
Dec 16 13:13:31.005: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020420228s
Dec 16 13:13:33.028: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042627456s
Dec 16 13:13:35.036: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050700995s
Dec 16 13:13:37.047: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062091789s
Dec 16 13:13:39.139: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154271037s
STEP: Saw pod success
Dec 16 13:13:39.140: INFO: Pod "pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783" satisfied condition "success or failure"
Dec 16 13:13:39.149: INFO: Trying to get logs from node iruya-node pod pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783 container secret-env-test:
STEP: delete the pod
Dec 16 13:13:39.233: INFO: Waiting for pod pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783 to disappear
Dec 16 13:13:39.372: INFO: Pod pod-secrets-87d442da-9c4f-4648-b0d4-ca42db6b5783 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:13:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7210" for this suite.
Dec 16 13:13:45.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:13:45.540: INFO: namespace secrets-7210 deletion completed in 6.157954234s

• [SLOW TEST:16.843 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:13:45.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 13:13:45.882: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 16 13:13:45.904: INFO: Number of nodes with available pods: 0
Dec 16 13:13:45.904: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 16 13:13:46.084: INFO: Number of nodes with available pods: 0
Dec 16 13:13:46.084: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:47.093: INFO: Number of nodes with available pods: 0
Dec 16 13:13:47.093: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:48.113: INFO: Number of nodes with available pods: 0
Dec 16 13:13:48.113: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:49.095: INFO: Number of nodes with available pods: 0
Dec 16 13:13:49.095: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:50.141: INFO: Number of nodes with available pods: 0
Dec 16 13:13:50.141: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:51.105: INFO: Number of nodes with available pods: 0
Dec 16 13:13:51.105: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:52.103: INFO: Number of nodes with available pods: 0
Dec 16 13:13:52.103: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:53.096: INFO: Number of nodes with available pods: 0
Dec 16 13:13:53.096: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:54.094: INFO: Number of nodes with available pods: 0
Dec 16 13:13:54.094: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:13:55.863: INFO: Number of nodes with available pods: 0
Dec 
16 13:13:55.863: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:13:56.091: INFO: Number of nodes with available pods: 1 Dec 16 13:13:56.091: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 16 13:13:56.220: INFO: Number of nodes with available pods: 1 Dec 16 13:13:56.220: INFO: Number of running nodes: 0, number of available pods: 1 Dec 16 13:13:57.234: INFO: Number of nodes with available pods: 0 Dec 16 13:13:57.234: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Dec 16 13:13:57.255: INFO: Number of nodes with available pods: 0 Dec 16 13:13:57.255: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:13:58.265: INFO: Number of nodes with available pods: 0 Dec 16 13:13:58.266: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:13:59.265: INFO: Number of nodes with available pods: 0 Dec 16 13:13:59.265: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:14:00.268: INFO: Number of nodes with available pods: 0 Dec 16 13:14:00.268: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:14:01.263: INFO: Number of nodes with available pods: 0 Dec 16 13:14:01.263: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:14:02.260: INFO: Number of nodes with available pods: 0 Dec 16 13:14:02.261: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:14:03.262: INFO: Number of nodes with available pods: 0 Dec 16 13:14:03.262: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:14:04.262: INFO: Number of nodes with available pods: 0 Dec 16 13:14:04.262: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:14:05.299: INFO: Number of nodes with available pods: 0 Dec 16 13:14:05.299: INFO: Node iruya-node is running more than one daemon pod 
Dec 16 13:14:06.266: INFO: Number of nodes with available pods: 0
Dec 16 13:14:06.266: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:07.269: INFO: Number of nodes with available pods: 0
Dec 16 13:14:07.269: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:08.269: INFO: Number of nodes with available pods: 0
Dec 16 13:14:08.269: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:09.272: INFO: Number of nodes with available pods: 0
Dec 16 13:14:09.273: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:10.297: INFO: Number of nodes with available pods: 0
Dec 16 13:14:10.297: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:11.264: INFO: Number of nodes with available pods: 0
Dec 16 13:14:11.264: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:12.322: INFO: Number of nodes with available pods: 0
Dec 16 13:14:12.322: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:13.272: INFO: Number of nodes with available pods: 0
Dec 16 13:14:13.272: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:14:14.275: INFO: Number of nodes with available pods: 1
Dec 16 13:14:14.275: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7678, will wait for the garbage collector to delete the pods
Dec 16 13:14:14.359: INFO: Deleting DaemonSet.extensions daemon-set took: 15.35316ms
Dec 16 13:14:14.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.501139ms
Dec 16 13:14:26.796: INFO: Number of nodes with available pods: 0
Dec 16 13:14:26.796: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 13:14:26.812: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7678/daemonsets","resourceVersion":"16885955"},"items":null}
Dec 16 13:14:26.817: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7678/pods","resourceVersion":"16885955"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:14:26.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7678" for this suite.
Dec 16 13:14:32.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:14:33.068: INFO: namespace daemonsets-7678 deletion completed in 6.139050619s
• [SLOW TEST:47.527 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:14:33.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 13:14:33.147: INFO: Creating ReplicaSet my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f
Dec 16 13:14:33.244: INFO: Pod name my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f: Found 0 pods out of 1
Dec 16 13:14:38.254: INFO: Pod name my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f: Found 1 pods out of 1
Dec 16 13:14:38.254: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f" is running
Dec 16 13:14:44.262: INFO: Pod "my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f-vwd2v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 13:14:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 13:14:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 13:14:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 13:14:33 +0000 UTC Reason: Message:}])
Dec 16 13:14:44.262: INFO: Trying to dial the pod
Dec 16 13:14:49.292: INFO: Controller my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f: Got expected result from replica 1 [my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f-vwd2v]: "my-hostname-basic-a6505195-89d1-4573-8c24-9bbe5e648b5f-vwd2v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:14:49.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1121" for this suite.
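For context, the ReplicaSet exercised above is a single-replica controller whose pod serves back its own hostname, which is why the log expects the pod's name from replica 1. A rough sketch of the object the test creates follows; the name, labels, image, and port below are illustrative assumptions, not values taken from this log (the real test appends a generated UUID suffix to the name):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # assumed; the e2e test adds a UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed public image
        ports:
        - containerPort: 9376      # assumed serving port
```

The test then dials each replica and checks that the response matches the pod's own name, as in the "Got expected result from replica 1" entry above.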
Dec 16 13:14:55.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:14:55.439: INFO: namespace replicaset-1121 deletion completed in 6.140896294s
• [SLOW TEST:22.371 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:14:55.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2106/configmap-test-5342cce9-1d81-497b-a20b-f86073b6bb07
STEP: Creating a pod to test consume configMaps
Dec 16 13:14:55.703: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4" in namespace "configmap-2106" to be "success or failure"
Dec 16 13:14:55.728: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.927868ms
Dec 16 13:14:58.415: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711491836s
Dec 16 13:15:00.431: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728157417s
Dec 16 13:15:02.444: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.740898882s
Dec 16 13:15:04.499: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796125675s
Dec 16 13:15:06.528: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.825005167s
Dec 16 13:15:08.544: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.841092387s
Dec 16 13:15:10.558: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.85444529s
STEP: Saw pod success
Dec 16 13:15:10.558: INFO: Pod "pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4" satisfied condition "success or failure"
Dec 16 13:15:10.562: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4 container env-test:
STEP: delete the pod
Dec 16 13:15:10.669: INFO: Waiting for pod pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4 to disappear
Dec 16 13:15:10.674: INFO: Pod pod-configmaps-d4f7e681-c203-47f7-86a0-fbb0d6d771f4 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:15:10.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2106" for this suite.
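The ConfigMap-as-environment pattern validated above injects a ConfigMap key into a container env var via `configMapKeyRef`. A minimal sketch of the pair of objects involved; the names, key, and image here are assumptions for illustration (the log only shows generated names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test            # assumed name
data:
  data-1: value-1                 # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps            # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                # assumed image
    command: ["sh", "-c", "env"]  # dumps the environment, which the test reads from logs
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The pod runs to completion, and the test asserts the expected variable appears in the container's log output.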
Dec 16 13:15:16.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:15:16.846: INFO: namespace configmap-2106 deletion completed in 6.165306494s
• [SLOW TEST:21.406 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:15:16.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 16 13:15:17.051: INFO: Waiting up to 5m0s for pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368" in namespace "downward-api-3316" to be "success or failure"
Dec 16 13:15:17.057: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368": Phase="Pending", Reason="", readiness=false. Elapsed: 5.873721ms
Dec 16 13:15:19.064: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012502814s
Dec 16 13:15:21.069: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017775111s
Dec 16 13:15:23.077: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025597653s
Dec 16 13:15:25.084: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032611202s
Dec 16 13:15:27.091: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.039863335s
STEP: Saw pod success
Dec 16 13:15:27.092: INFO: Pod "downward-api-03ae2003-b431-425e-8351-059d4bf50368" satisfied condition "success or failure"
Dec 16 13:15:27.096: INFO: Trying to get logs from node iruya-node pod downward-api-03ae2003-b431-425e-8351-059d4bf50368 container dapi-container:
STEP: delete the pod
Dec 16 13:15:27.189: INFO: Waiting for pod downward-api-03ae2003-b431-425e-8351-059d4bf50368 to disappear
Dec 16 13:15:27.197: INFO: Pod downward-api-03ae2003-b431-425e-8351-059d4bf50368 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:15:27.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3316" for this suite.
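What the test above checks is that Downward API env vars using `resourceFieldRef` for `limits.cpu` and `limits.memory` fall back to the node's allocatable values when the container declares no limits of its own. A minimal sketch of such a pod, with name and image as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults     # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                # assumed image; note: no resources.limits set
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu    # with no limit set, defaults to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory # with no limit set, defaults to node allocatable memory
```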
Dec 16 13:15:33.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:15:33.408: INFO: namespace downward-api-3316 deletion completed in 6.20010781s
• [SLOW TEST:16.560 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:15:33.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 13:15:33.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7" in namespace "projected-3159" to be "success or failure"
Dec 16 13:15:33.529: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.445623ms
Dec 16 13:15:35.719: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197987031s
Dec 16 13:15:37.728: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207317053s
Dec 16 13:15:39.741: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220020445s
Dec 16 13:15:41.749: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228612303s
Dec 16 13:15:43.763: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.241957733s
STEP: Saw pod success
Dec 16 13:15:43.763: INFO: Pod "downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7" satisfied condition "success or failure"
Dec 16 13:15:43.769: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7 container client-container:
STEP: delete the pod
Dec 16 13:15:43.848: INFO: Waiting for pod downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7 to disappear
Dec 16 13:15:43.858: INFO: Pod downwardapi-volume-a669ac40-0a03-4ec4-8c45-152b949d8aa7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:15:43.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3159" for this suite.
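Unlike the env-var form, this test exposes the container's memory limit through a projected downward API volume, so the value arrives as a file the container can read. A sketch of the relevant pod fragment; the pod/volume names, paths, and divisor are assumptions (the log shows the container name `client-container`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi                   # assumed limit the file should reflect
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo          # assumed mount path
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container   # required for volume resourceFieldRef
              resource: limits.memory
              divisor: 1Mi             # assumed divisor; scales the reported value
```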
Dec 16 13:15:50.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:15:50.143: INFO: namespace projected-3159 deletion completed in 6.158896197s
• [SLOW TEST:16.735 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:15:50.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 16 13:15:50.379: INFO: Waiting up to 5m0s for pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb" in namespace "emptydir-1895" to be "success or failure"
Dec 16 13:15:50.412: INFO: Pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.51286ms
Dec 16 13:15:52.420: INFO: Pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040686723s
Dec 16 13:15:54.430: INFO: Pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049925378s
Dec 16 13:15:56.442: INFO: Pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062104061s
Dec 16 13:15:58.457: INFO: Pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076865508s
STEP: Saw pod success
Dec 16 13:15:58.457: INFO: Pod "pod-b5a9a950-2554-4da2-8120-8faa699dffeb" satisfied condition "success or failure"
Dec 16 13:15:58.462: INFO: Trying to get logs from node iruya-node pod pod-b5a9a950-2554-4da2-8120-8faa699dffeb container test-container:
STEP: delete the pod
Dec 16 13:15:58.563: INFO: Waiting for pod pod-b5a9a950-2554-4da2-8120-8faa699dffeb to disappear
Dec 16 13:15:58.651: INFO: Pod pod-b5a9a950-2554-4da2-8120-8faa699dffeb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:15:58.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1895" for this suite.
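An `emptyDir` volume with `medium: Memory` is backed by tmpfs, and the test above verifies the mount type and permission mode of such a volume from inside the container. A stand-in pod (name, image, and mount path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs          # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    # print the mount type and the directory's mode, roughly what the test inspects
    command: ["sh", "-c", "mount | grep /test-volume; stat -c %a /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume       # assumed mount path
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
```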
Dec 16 13:16:04.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:16:04.953: INFO: namespace emptydir-1895 deletion completed in 6.291167175s
• [SLOW TEST:14.810 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:16:04.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 16 13:16:05.127: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:16:24.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5612" for this suite.
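On a `restartPolicy: Always` pod, init containers run one at a time, each to successful completion, before the regular containers start; that ordering is what the test above verifies. A minimal sketch of the shape of such a pod (names and images are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo               # assumed name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox                  # assumed; each init container must exit 0
    command: ["sh", "-c", "true"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "true"]   # runs only after init1 completes
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1     # assumed long-running container, starts last
```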
Dec 16 13:16:52.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:16:52.673: INFO: namespace init-container-5612 deletion completed in 28.35609524s • [SLOW TEST:47.720 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:16:52.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-4ld9 STEP: Creating a pod to test atomic-volume-subpath Dec 16 13:16:52.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4ld9" in namespace "subpath-1969" to be "success or failure" Dec 16 13:16:52.935: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Pending", Reason="", readiness=false. Elapsed: 38.34387ms Dec 16 13:16:54.948: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051889801s Dec 16 13:16:56.954: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057244127s Dec 16 13:16:58.963: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067032075s Dec 16 13:17:00.981: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 8.084700541s Dec 16 13:17:02.987: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 10.091050437s Dec 16 13:17:04.996: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 12.099981079s Dec 16 13:17:07.014: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 14.117487899s Dec 16 13:17:09.025: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 16.129048878s Dec 16 13:17:11.037: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 18.141237135s Dec 16 13:17:13.045: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 20.149235891s Dec 16 13:17:15.053: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 22.156530911s Dec 16 13:17:17.059: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 24.162495277s Dec 16 13:17:19.780: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Running", Reason="", readiness=true. Elapsed: 26.884054514s Dec 16 13:17:21.808: INFO: Pod "pod-subpath-test-configmap-4ld9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.911708046s STEP: Saw pod success Dec 16 13:17:21.808: INFO: Pod "pod-subpath-test-configmap-4ld9" satisfied condition "success or failure" Dec 16 13:17:21.818: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-4ld9 container test-container-subpath-configmap-4ld9: STEP: delete the pod Dec 16 13:17:22.184: INFO: Waiting for pod pod-subpath-test-configmap-4ld9 to disappear Dec 16 13:17:22.233: INFO: Pod pod-subpath-test-configmap-4ld9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-4ld9 Dec 16 13:17:22.234: INFO: Deleting pod "pod-subpath-test-configmap-4ld9" in namespace "subpath-1969" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:17:22.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1969" for this suite. Dec 16 13:17:28.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:17:28.487: INFO: namespace subpath-1969 deletion completed in 6.140261014s • [SLOW TEST:35.813 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 
13:17:28.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Dec 16 13:17:28.618: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix833739910/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:17:28.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2802" for this suite. Dec 16 13:17:34.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:17:34.967: INFO: namespace kubectl-2802 deletion completed in 6.169608339s • [SLOW TEST:6.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:17:34.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 16 13:17:35.099: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:17:43.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3426" for this suite. 
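The websocket-logs test above creates a pod whose spec is not reproduced in this log. A minimal sketch of the kind of pod such a test could read logs from (the name, image, and command here are assumptions, not taken from this run):

```yaml
# Illustrative only: the actual pod manifest the e2e test created
# is not present in this log.
apiVersion: v1
kind: Pod
metadata:
  name: logs-websocket-demo
  namespace: pods-3426
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo container is alive; sleep 600"]
```

Logs for a pod are served by the `/api/v1/namespaces/<ns>/pods/<pod>/log` subresource; the conformance test exercises that same endpoint over a websocket connection rather than a plain HTTP GET.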
Dec 16 13:18:25.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:18:25.378: INFO: namespace pods-3426 deletion completed in 42.167250592s • [SLOW TEST:50.411 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:18:25.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 16 13:18:25.659: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5503,SelfLink:/api/v1/namespaces/watch-5503/configmaps/e2e-watch-test-label-changed,UID:baf3160e-a334-4bf2-8f00-766e8d94b668,ResourceVersion:16886515,Generation:0,CreationTimestamp:2019-12-16 13:18:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 16 13:18:25.659: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5503,SelfLink:/api/v1/namespaces/watch-5503/configmaps/e2e-watch-test-label-changed,UID:baf3160e-a334-4bf2-8f00-766e8d94b668,ResourceVersion:16886516,Generation:0,CreationTimestamp:2019-12-16 13:18:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 16 13:18:25.660: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5503,SelfLink:/api/v1/namespaces/watch-5503/configmaps/e2e-watch-test-label-changed,UID:baf3160e-a334-4bf2-8f00-766e8d94b668,ResourceVersion:16886517,Generation:0,CreationTimestamp:2019-12-16 13:18:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe 
an add notification for the watched object when the label value was restored Dec 16 13:18:35.782: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5503,SelfLink:/api/v1/namespaces/watch-5503/configmaps/e2e-watch-test-label-changed,UID:baf3160e-a334-4bf2-8f00-766e8d94b668,ResourceVersion:16886533,Generation:0,CreationTimestamp:2019-12-16 13:18:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 16 13:18:35.783: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5503,SelfLink:/api/v1/namespaces/watch-5503/configmaps/e2e-watch-test-label-changed,UID:baf3160e-a334-4bf2-8f00-766e8d94b668,ResourceVersion:16886534,Generation:0,CreationTimestamp:2019-12-16 13:18:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Dec 16 13:18:35.783: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5503,SelfLink:/api/v1/namespaces/watch-5503/configmaps/e2e-watch-test-label-changed,UID:baf3160e-a334-4bf2-8f00-766e8d94b668,ResourceVersion:16886535,Generation:0,CreationTimestamp:2019-12-16 13:18:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:18:35.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5503" for this suite. Dec 16 13:18:41.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:18:42.028: INFO: namespace watch-5503 deletion completed in 6.235088021s • [SLOW TEST:16.650 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:18:42.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1784/configmap-test-20ded34d-d4ff-411a-aa1b-d10dd2405d74 STEP: Creating a pod to test consume 
configMaps Dec 16 13:18:42.101: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890" in namespace "configmap-1784" to be "success or failure" Dec 16 13:18:42.108: INFO: Pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28006ms Dec 16 13:18:44.114: INFO: Pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012728245s Dec 16 13:18:46.121: INFO: Pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019401033s Dec 16 13:18:48.169: INFO: Pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067408039s Dec 16 13:18:50.182: INFO: Pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080837666s STEP: Saw pod success Dec 16 13:18:50.183: INFO: Pod "pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890" satisfied condition "success or failure" Dec 16 13:18:50.189: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890 container env-test: STEP: delete the pod Dec 16 13:18:50.410: INFO: Waiting for pod pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890 to disappear Dec 16 13:18:50.416: INFO: Pod pod-configmaps-2a5fd20e-9922-4f8c-b06d-2eea020bf890 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:18:50.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1784" for this suite. 
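The log above names the ConfigMap and the `env-test` container but not their specs. A sketch of how a container typically consumes a ConfigMap key through an environment variable (the key and variable names here are illustrative, not from this run):

```yaml
# Illustrative only: the actual ConfigMap/pod specs are not in this log;
# only the namespace and container name echo the entries above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
  namespace: configmap-1784
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
  namespace: configmap-1784
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The test passes when the pod runs to completion ("success or failure" condition) and its log output shows the expected variable value.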
Dec 16 13:18:56.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:18:56.660: INFO: namespace configmap-1784 deletion completed in 6.236605029s • [SLOW TEST:14.632 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:18:56.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-78b96984-93cb-4af4-aecd-379ff00e40c9 in namespace container-probe-8577 Dec 16 13:19:04.868: INFO: Started pod busybox-78b96984-93cb-4af4-aecd-379ff00e40c9 in namespace container-probe-8577 STEP: checking the pod's current state and verifying that restartCount is present Dec 16 13:19:04.871: INFO: Initial restart count of pod busybox-78b96984-93cb-4af4-aecd-379ff00e40c9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:23:06.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8577" for this suite. Dec 16 13:23:12.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:23:12.873: INFO: namespace container-probe-8577 deletion completed in 6.191259083s • [SLOW TEST:256.212 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:23:12.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a2f3c083-5b51-4c9f-8cf9-6c12b4194b3e STEP: Creating a pod to test consume configMaps Dec 16 13:23:13.000: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091" in namespace "projected-7182" to be 
"success or failure" Dec 16 13:23:13.013: INFO: Pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091": Phase="Pending", Reason="", readiness=false. Elapsed: 13.347438ms Dec 16 13:23:15.022: INFO: Pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022053269s Dec 16 13:23:17.032: INFO: Pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031989784s Dec 16 13:23:19.039: INFO: Pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039533396s Dec 16 13:23:21.054: INFO: Pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05389857s STEP: Saw pod success Dec 16 13:23:21.054: INFO: Pod "pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091" satisfied condition "success or failure" Dec 16 13:23:21.058: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091 container projected-configmap-volume-test: STEP: delete the pod Dec 16 13:23:21.129: INFO: Waiting for pod pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091 to disappear Dec 16 13:23:21.138: INFO: Pod pod-projected-configmaps-7ebbd650-bd1b-4ce5-82d7-904028418091 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:23:21.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7182" for this suite. 
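The "consumable in multiple volumes" test above mounts the same ConfigMap into one pod twice via projected volumes. A hedged sketch of that shape (mount paths and keys are assumptions; the ConfigMap name echoes the log):

```yaml
# Sketch of one ConfigMap consumed through two projected volumes
# in a single pod; not the exact manifest from this run.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
  namespace: projected-7182
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /etc/projected-volume-1
    - name: projected-volume-2
      mountPath: /etc/projected-volume-2
  volumes:
  - name: projected-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-a2f3c083-5b51-4c9f-8cf9-6c12b4194b3e
  - name: projected-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-a2f3c083-5b51-4c9f-8cf9-6c12b4194b3e
```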
Dec 16 13:23:27.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:23:27.390: INFO: namespace projected-7182 deletion completed in 6.246752917s • [SLOW TEST:14.516 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:23:27.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:23:33.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3736" for this suite. 
Dec 16 13:23:39.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:23:39.386: INFO: namespace watch-3736 deletion completed in 6.21452153s • [SLOW TEST:11.995 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:23:39.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-6znw STEP: Creating a pod to test atomic-volume-subpath Dec 16 13:23:39.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6znw" in namespace "subpath-7479" to be "success or failure" Dec 16 13:23:39.712: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Pending", Reason="", readiness=false. Elapsed: 131.116177ms Dec 16 13:23:41.723: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.141230025s Dec 16 13:23:43.739: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157480644s Dec 16 13:23:45.750: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168622041s Dec 16 13:23:47.758: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177021178s Dec 16 13:23:49.767: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 10.185967946s Dec 16 13:23:51.783: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 12.201320419s Dec 16 13:23:53.803: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 14.222066093s Dec 16 13:23:55.815: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 16.234182485s Dec 16 13:23:57.828: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 18.246522515s Dec 16 13:23:59.836: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 20.254228817s Dec 16 13:24:01.846: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 22.264935315s Dec 16 13:24:03.857: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 24.275914666s Dec 16 13:24:05.872: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 26.290776417s Dec 16 13:24:07.883: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Running", Reason="", readiness=true. Elapsed: 28.302199052s Dec 16 13:24:09.899: INFO: Pod "pod-subpath-test-secret-6znw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.3175229s STEP: Saw pod success Dec 16 13:24:09.899: INFO: Pod "pod-subpath-test-secret-6znw" satisfied condition "success or failure" Dec 16 13:24:09.906: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-6znw container test-container-subpath-secret-6znw: STEP: delete the pod Dec 16 13:24:10.023: INFO: Waiting for pod pod-subpath-test-secret-6znw to disappear Dec 16 13:24:10.030: INFO: Pod pod-subpath-test-secret-6znw no longer exists STEP: Deleting pod pod-subpath-test-secret-6znw Dec 16 13:24:10.031: INFO: Deleting pod "pod-subpath-test-secret-6znw" in namespace "subpath-7479" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:24:10.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7479" for this suite. Dec 16 13:24:16.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:24:16.170: INFO: namespace subpath-7479 deletion completed in 6.127061212s • [SLOW TEST:36.784 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Dec 16 13:24:16.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:25:06.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-324" for this suite. 
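The three container names in the runtime test above encode restart policies: `terminate-cmd-rpa` (restartPolicy: Always), `terminate-cmd-rpof` (OnFailure), and `terminate-cmd-rpn` (Never). A sketch of the Never case (pod name, image, and exit code are illustrative, not from this run):

```yaml
# Illustrative pod for the Never-restart case; the exact e2e specs
# are not reproduced in this log.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
```

With `restartPolicy: Never` a failing container leaves the pod in `Phase=Failed` with `RestartCount` 0; under `Always` or `OnFailure` the kubelet restarts the container and the count increases, which is what the test's expected 'RestartCount'/'Phase'/'State' checks verify.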
Dec 16 13:25:12.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:25:12.835: INFO: namespace container-runtime-324 deletion completed in 6.159864242s • [SLOW TEST:56.665 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:25:12.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 16 13:25:12.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4364' Dec 16 13:25:15.452: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 16 13:25:15.452: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Dec 16 13:25:15.462: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Dec 16 13:25:15.487: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Dec 16 13:25:15.510: INFO: scanned /root for discovery docs: Dec 16 13:25:15.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4364' Dec 16 13:25:37.965: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 16 13:25:37.965: INFO: stdout: "Created e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0\nScaling up e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Dec 16 13:25:37.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:25:38.191: INFO: stderr: "" Dec 16 13:25:38.191: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:25:43.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:25:43.399: INFO: stderr: "" Dec 16 13:25:43.400: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:25:48.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:25:48.607: INFO: stderr: "" Dec 16 13:25:48.607: INFO: stdout: 
"e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:25:53.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:25:53.772: INFO: stderr: "" Dec 16 13:25:53.772: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:25:58.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:25:59.019: INFO: stderr: "" Dec 16 13:25:59.019: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:04.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:04.232: INFO: stderr: "" Dec 16 13:26:04.232: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:09.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:09.384: INFO: stderr: "" Dec 16 13:26:09.384: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:14.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template 
--template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:14.646: INFO: stderr: "" Dec 16 13:26:14.646: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:19.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:19.874: INFO: stderr: "" Dec 16 13:26:19.874: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:24.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:25.070: INFO: stderr: "" Dec 16 13:26:25.070: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:30.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:30.275: INFO: stderr: "" Dec 16 13:26:30.275: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:35.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:35.462: INFO: stderr: "" Dec 16 13:26:35.462: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws 
" STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:40.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:40.694: INFO: stderr: "" Dec 16 13:26:40.695: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:45.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:45.937: INFO: stderr: "" Dec 16 13:26:45.937: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:50.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:51.079: INFO: stderr: "" Dec 16 13:26:51.080: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:26:56.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:26:56.218: INFO: stderr: "" Dec 16 13:26:56.218: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m e2e-test-nginx-rc-qt9ws " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 16 13:27:01.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc 
--namespace=kubectl-4364' Dec 16 13:27:01.388: INFO: stderr: "" Dec 16 13:27:01.388: INFO: stdout: "e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m " Dec 16 13:27:01.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4364' Dec 16 13:27:01.584: INFO: stderr: "" Dec 16 13:27:01.585: INFO: stdout: "true" Dec 16 13:27:01.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4364' Dec 16 13:27:01.738: INFO: stderr: "" Dec 16 13:27:01.739: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Dec 16 13:27:01.739: INFO: e2e-test-nginx-rc-f510a2b7a8c7e005d2742b884c3f14f0-h2v7m is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Dec 16 13:27:01.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4364' Dec 16 13:27:01.922: INFO: stderr: "" Dec 16 13:27:01.922: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:27:01.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4364" for this suite. 
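The rolling-update check above re-runs the same kubectl template query every five seconds until only one pod matches `run=e2e-test-nginx-rc`. The wait-until pattern behind those repeated "expected=1 actual=2" lines can be sketched as follows (`wait_for_replicas` and the simulated pod lister are hypothetical illustrations, not the e2e framework's actual code):

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300, interval=5):
    """Poll a pod-listing callable until it reports exactly `expected` pods.

    `list_pods` stands in for the kubectl template query the suite runs
    ("get pods -o template ... -l run=e2e-test-nginx-rc") and should
    return the list of matching pod names.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pods = list_pods()
        if len(pods) == expected:
            return pods
        # Mirrors the log's "Replicas for run=...: expected=1 actual=2" wait
        time.sleep(interval)
    raise TimeoutError(f"timed out waiting for {expected} matching pods")

# Simulated rollout: the old replica disappears after a few polls
states = iter([["new-h2v7m", "old-qt9ws"]] * 3 + [["new-h2v7m"]])
print(wait_for_replicas(lambda: next(states), expected=1, interval=0))
# → ['new-h2v7m']
```

In the real run this loop iterated for roughly 85 seconds before the superseded `e2e-test-nginx-rc-qt9ws` pod was finally removed.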
Dec 16 13:27:07.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:27:08.076: INFO: namespace kubectl-4364 deletion completed in 6.147001078s • [SLOW TEST:115.241 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:27:08.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-bdf3fdd9-237e-42ce-a766-556ab495b9fc in namespace container-probe-4712 Dec 16 13:27:16.336: INFO: Started pod busybox-bdf3fdd9-237e-42ce-a766-556ab495b9fc in namespace container-probe-4712 STEP: checking the pod's current state and verifying that restartCount is present Dec 16 13:27:16.340: INFO: Initial restart count of pod 
busybox-bdf3fdd9-237e-42ce-a766-556ab495b9fc is 0 Dec 16 13:28:13.031: INFO: Restart count of pod container-probe-4712/busybox-bdf3fdd9-237e-42ce-a766-556ab495b9fc is now 1 (56.691079912s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:28:13.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4712" for this suite. Dec 16 13:28:19.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:28:19.219: INFO: namespace container-probe-4712 deletion completed in 6.107968315s • [SLOW TEST:71.142 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:28:19.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 16 
13:28:19.386: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 16 13:28:22.802: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:28:22.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7531" for this suite. Dec 16 13:28:35.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:28:35.245: INFO: namespace replication-controller-7531 deletion completed in 12.17458119s • [SLOW TEST:16.026 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:28:35.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be 
consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-ce11cf2a-eeda-4b67-a073-009c2a1722b5 STEP: Creating a pod to test consume secrets Dec 16 13:28:35.389: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713" in namespace "projected-7787" to be "success or failure" Dec 16 13:28:35.394: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713": Phase="Pending", Reason="", readiness=false. Elapsed: 5.352984ms Dec 16 13:28:37.400: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011083154s Dec 16 13:28:39.408: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019644526s Dec 16 13:28:41.419: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030150133s Dec 16 13:28:43.427: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038175291s Dec 16 13:28:45.434: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.04517529s STEP: Saw pod success Dec 16 13:28:45.434: INFO: Pod "pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713" satisfied condition "success or failure" Dec 16 13:28:45.439: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713 container projected-secret-volume-test: STEP: delete the pod Dec 16 13:28:45.511: INFO: Waiting for pod pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713 to disappear Dec 16 13:28:45.555: INFO: Pod pod-projected-secrets-a2e57b1c-8651-4c62-91d4-fe3caf433713 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:28:45.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7787" for this suite. Dec 16 13:28:51.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:28:51.735: INFO: namespace projected-7787 deletion completed in 6.17288245s • [SLOW TEST:16.489 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:28:51.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4c47a570-940b-435c-85c3-5993d48ef42d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4c47a570-940b-435c-85c3-5993d48ef42d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:29:02.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7466" for this suite. Dec 16 13:29:24.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:29:24.552: INFO: namespace projected-7466 deletion completed in 22.490995688s • [SLOW TEST:32.816 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:29:24.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 16 13:29:24.710: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Dec 16 13:29:24.753: INFO: Number of nodes with available pods: 0 Dec 16 13:29:24.753: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:25.775: INFO: Number of nodes with available pods: 0 Dec 16 13:29:25.775: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:27.287: INFO: Number of nodes with available pods: 0 Dec 16 13:29:27.287: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:27.792: INFO: Number of nodes with available pods: 0 Dec 16 13:29:27.792: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:28.777: INFO: Number of nodes with available pods: 0 Dec 16 13:29:28.777: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:29.868: INFO: Number of nodes with available pods: 0 Dec 16 13:29:29.869: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:32.073: INFO: Number of nodes with available pods: 0 Dec 16 13:29:32.073: INFO: Node iruya-node is running more than one daemon pod Dec 16 13:29:33.061: INFO: Number of nodes with available pods: 1 Dec 16 13:29:33.061: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 16 13:29:33.804: INFO: Number of nodes with available pods: 1 Dec 16 13:29:33.804: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 16 13:29:34.768: INFO: Number of nodes with available pods: 2 Dec 16 13:29:34.768: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods 
image. STEP: Check that daemon pods images are updated. Dec 16 13:29:34.833: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:34.833: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:35.909: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:35.910: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:36.913: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:36.913: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:37.910: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:37.910: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:38.908: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:38.908: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:39.912: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:39.912: INFO: Wrong image for pod: daemon-set-znlgr. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:40.907: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:40.907: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:40.907: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:41.907: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:41.907: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:41.907: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:42.910: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:42.910: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:42.910: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:43.911: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:43.912: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:43.912: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:44.911: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:44.911: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:44.911: INFO: Wrong image for pod: daemon-set-znlgr. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:45.937: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:45.937: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:45.937: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:46.913: INFO: Wrong image for pod: daemon-set-nb8l2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:46.913: INFO: Pod daemon-set-nb8l2 is not available Dec 16 13:29:46.913: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:47.930: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:47.931: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:48.924: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:48.924: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:50.141: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:50.141: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:50.907: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:50.907: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:51.909: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:51.909: INFO: Wrong image for pod: daemon-set-znlgr. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:53.484: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:53.484: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:54.033: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:54.033: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:54.911: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:54.911: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:55.907: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:55.907: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:56.910: INFO: Pod daemon-set-4sb76 is not available Dec 16 13:29:56.910: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:57.909: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:58.913: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:29:59.913: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:00.915: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:01.909: INFO: Wrong image for pod: daemon-set-znlgr. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:01.910: INFO: Pod daemon-set-znlgr is not available Dec 16 13:30:02.909: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:02.909: INFO: Pod daemon-set-znlgr is not available Dec 16 13:30:03.913: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:03.914: INFO: Pod daemon-set-znlgr is not available Dec 16 13:30:04.911: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:04.911: INFO: Pod daemon-set-znlgr is not available Dec 16 13:30:05.906: INFO: Wrong image for pod: daemon-set-znlgr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 16 13:30:05.906: INFO: Pod daemon-set-znlgr is not available Dec 16 13:30:06.923: INFO: Pod daemon-set-8pxxm is not available STEP: Check that daemon pods are still running on every node of the cluster. 
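The rollout above replaces daemon pods one at a time (each old pod goes "not available" before its successor appears) because the DaemonSet's update strategy is RollingUpdate. A minimal manifest exercising the same behavior might look like the sketch below; the name and images are taken from the log, but the selector label and the spec as a whole are illustrative assumptions, not the test's actual template:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # hypothetical label; the test's real selector is not shown in the log
  updateStrategy:
    type: RollingUpdate        # old pods are marked unavailable and replaced node by node
    rollingUpdate:
      maxUnavailable: 1        # at most one daemon pod down at a time, matching the observed rollout
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # the test then updates this to gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Updating `spec.template.spec.containers[0].image` under this strategy produces exactly the sequence logged above: each node's pod is deleted, a replacement is created, and the controller moves on only once the new pod is available.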
Dec 16 13:30:06.947: INFO: Number of nodes with available pods: 1
Dec 16 13:30:06.947: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:07.966: INFO: Number of nodes with available pods: 1
Dec 16 13:30:07.966: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:08.964: INFO: Number of nodes with available pods: 1
Dec 16 13:30:08.964: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:09.966: INFO: Number of nodes with available pods: 1
Dec 16 13:30:09.967: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:10.973: INFO: Number of nodes with available pods: 1
Dec 16 13:30:10.973: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:11.966: INFO: Number of nodes with available pods: 1
Dec 16 13:30:11.966: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:12.988: INFO: Number of nodes with available pods: 1
Dec 16 13:30:12.988: INFO: Node iruya-node is running more than one daemon pod
Dec 16 13:30:13.963: INFO: Number of nodes with available pods: 2
Dec 16 13:30:13.963: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8777, will wait for the garbage collector to delete the pods
Dec 16 13:30:14.053: INFO: Deleting DaemonSet.extensions daemon-set took: 20.499202ms
Dec 16 13:30:14.454: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.741734ms
Dec 16 13:30:20.860: INFO: Number of nodes with available pods: 0
Dec 16 13:30:20.860: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 13:30:20.864: INFO: daemonset:
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8777/daemonsets","resourceVersion":"16888060"},"items":null} Dec 16 13:30:20.867: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8777/pods","resourceVersion":"16888060"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:30:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8777" for this suite. Dec 16 13:30:28.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:30:29.039: INFO: namespace daemonsets-8777 deletion completed in 8.134612819s • [SLOW TEST:64.486 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:30:29.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:30:29.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7427" for this suite. Dec 16 13:30:35.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:30:35.313: INFO: namespace services-7427 deletion completed in 6.208396309s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.273 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:30:35.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Dec 16 13:30:35.489: INFO: Pod name pod-release: Found 0 pods out of 1 Dec 16 
13:30:40.502: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:30:41.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9023" for this suite. Dec 16 13:30:47.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:30:47.741: INFO: namespace replication-controller-9023 deletion completed in 6.174155937s • [SLOW TEST:12.428 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:30:47.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 16 13:30:49.348: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to 
kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:30:57.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1492" for this suite. Dec 16 13:31:48.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:31:48.185: INFO: namespace pods-1492 deletion completed in 50.197378464s • [SLOW TEST:60.443 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:31:48.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Dec 16 13:31:48.307: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2687" to be "success or failure" Dec 16 13:31:48.317: INFO: Pod "pod-host-path-test": Phase="Pending", 
Reason="", readiness=false. Elapsed: 10.042574ms
Dec 16 13:31:50.328: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020895388s
Dec 16 13:31:52.340: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032822182s
Dec 16 13:31:54.354: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04723736s
Dec 16 13:31:56.365: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058481418s
Dec 16 13:31:58.382: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074945085s
Dec 16 13:32:00.439: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.131955854s
STEP: Saw pod success
Dec 16 13:32:00.439: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 16 13:32:00.444: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 16 13:32:00.664: INFO: Waiting for pod pod-host-path-test to disappear
Dec 16 13:32:00.672: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:32:00.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2687" for this suite.
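The repeated `Phase="Pending" ... Elapsed:` entries above come from the e2e framework's fixed-interval wait loop (roughly a two-second poll against the 5m0s deadline declared in "Waiting up to 5m0s for pod ..."). A minimal Python sketch of that polling pattern, with hypothetical names, not the framework's actual implementation:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True,
    raising TimeoutError once `timeout` has elapsed (mirrors the
    fixed-interval wait loops seen in the log)."""
    start = clock()
    while True:
        if condition():
            return clock() - start  # total time waited
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)

# Simulated pod that reports Pending three times, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
wait_for(lambda: next(phases) == "Succeeded", timeout=5.0, interval=0.001)
```

The injectable `clock`/`sleep` parameters keep the sketch testable without real delays; the real framework simply sleeps between API polls.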
Dec 16 13:32:06.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:32:06.877: INFO: namespace hostpath-2687 deletion completed in 6.198024499s • [SLOW TEST:18.692 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:32:06.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-42903291-673b-4fe6-94e3-3199ae3e66be [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:32:06.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8810" for this suite. 
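The ConfigMap test above ("should fail to create ConfigMap with empty key") relies on the apiserver rejecting the object at validation time. As an illustrative approximation (not the actual apiserver code), ConfigMap keys must be non-empty, at most 253 characters, drawn from alphanumerics plus '-', '_' and '.', and must not be '.' or '..':

```python
import re

# Approximation of apiserver-side ConfigMap key validation; the exact
# limits and messages here are assumptions, not copied from Kubernetes.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_configmap_key(key: str) -> list:
    """Return a list of validation errors; empty list means the key is valid."""
    errs = []
    if len(key) == 0:
        errs.append("key must be non-empty")
    elif len(key) > 253:
        errs.append("key must be no more than 253 characters")
    elif not KEY_RE.match(key):
        errs.append("key must match [-._a-zA-Z0-9]+")
    if key in (".", ".."):
        errs.append(f"key may not be {key!r}")
    return errs
```

Under this model the test's empty-key ConfigMap fails with "key must be non-empty", which is why the test passes without ever creating a pod.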
Dec 16 13:32:13.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:32:13.155: INFO: namespace configmap-8810 deletion completed in 6.14277143s • [SLOW TEST:6.278 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:32:13.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-aa6fd019-9434-4225-ac6f-1e7e46e6ba64 STEP: Creating a pod to test consume configMaps Dec 16 13:32:13.236: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a" in namespace "projected-5941" to be "success or failure" Dec 16 13:32:13.256: INFO: Pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.257709ms
Dec 16 13:32:15.265: INFO: Pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029367624s
Dec 16 13:32:17.280: INFO: Pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043791873s
Dec 16 13:32:19.288: INFO: Pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05215149s
Dec 16 13:32:21.297: INFO: Pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060969293s
STEP: Saw pod success
Dec 16 13:32:21.297: INFO: Pod "pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a" satisfied condition "success or failure"
Dec 16 13:32:21.301: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a container projected-configmap-volume-test:
STEP: delete the pod
Dec 16 13:32:21.430: INFO: Waiting for pod pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a to disappear
Dec 16 13:32:21.435: INFO: Pod pod-projected-configmaps-9bba77ae-bfa5-44e2-b491-90fafd5e5d8a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:32:21.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5941" for this suite.
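The projected-configMap test above mounts a ConfigMap with an `items` mapping, so a key can be exposed under a different relative path with its own file mode. A toy Python model of that key-to-file projection (hypothetical helper; the 0o644 default is an assumption matching the usual `defaultMode`):

```python
def project_configmap(data, items=None, default_mode=0o644):
    """Toy model of a configMap volume source: each selected key becomes
    a file whose content is the value. `items` optionally remaps keys to
    relative paths and per-file modes, as in the test's 'mappings and
    Item mode set' case. Returns {path: (content, mode)}."""
    if items is None:
        # No items: every key is projected under its own name.
        items = [{"key": k, "path": k} for k in data]
    files = {}
    for item in items:
        key = item["key"]
        if key not in data:
            raise KeyError(f"configMap has no key {key!r}")
        files[item["path"]] = (data[key], item.get("mode", default_mode))
    return files
```

For example, projecting key `data-1` to `path/to/data-2` with mode 0o400 yields a single read-only file at that relative path, which is what the test's container then reads back.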
Dec 16 13:32:27.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:32:27.627: INFO: namespace projected-5941 deletion completed in 6.185282102s • [SLOW TEST:14.471 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:32:27.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 16 13:32:36.414: INFO: Successfully updated pod "annotationupdate10b5b45b-efe0-4bcf-b0e7-f89f22058f4c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:32:40.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-6055" for this suite. Dec 16 13:33:02.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:33:02.743: INFO: namespace projected-6055 deletion completed in 22.135492493s • [SLOW TEST:35.116 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:33:02.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1216 13:33:13.092161 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 16 13:33:13.092: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:33:13.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1999" for this suite.
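The garbage-collector test above deletes an RC and waits for its pods to be collected through their ownerReferences. A toy sketch of the cascading rule (a dependent is removed once none of its owners still exist); this is a simplified fixed-point model, not the controller's actual dependency-graph logic:

```python
def collect_garbage(objects):
    """Toy garbage collector. `objects` maps UID -> set of owner UIDs
    (empty set = no ownerReferences). Repeatedly delete any object all
    of whose owners are gone, until nothing changes, then return the
    surviving objects."""
    live = dict(objects)
    changed = True
    while changed:
        changed = False
        for uid, owners in list(live.items()):
            # Owned object whose owners have all been deleted: collect it.
            if owners and not owners & live.keys():
                del live[uid]
                changed = True
    return live
```

In the test's scenario, deleting the RC removes the pods' only owner, so the next pass collects every pod, matching the "wait for all pods to be garbage collected" step.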
Dec 16 13:33:19.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:33:19.452: INFO: namespace gc-1999 deletion completed in 6.354955808s • [SLOW TEST:16.709 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:33:19.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Dec 16 13:33:20.385: INFO: Pod name wrapped-volume-race-a9f996b1-cd9c-48a2-93b1-edeb8c3a9813: Found 0 pods out of 5 Dec 16 13:33:25.414: INFO: Pod name wrapped-volume-race-a9f996b1-cd9c-48a2-93b1-edeb8c3a9813: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a9f996b1-cd9c-48a2-93b1-edeb8c3a9813 in namespace emptydir-wrapper-6462, will wait for the garbage collector to delete the pods Dec 16 13:33:51.570: INFO: Deleting ReplicationController 
wrapped-volume-race-a9f996b1-cd9c-48a2-93b1-edeb8c3a9813 took: 10.55293ms Dec 16 13:33:51.971: INFO: Terminating ReplicationController wrapped-volume-race-a9f996b1-cd9c-48a2-93b1-edeb8c3a9813 pods took: 401.193067ms STEP: Creating RC which spawns configmap-volume pods Dec 16 13:34:37.706: INFO: Pod name wrapped-volume-race-979ff3fb-aa1c-4a69-a0e9-b9768f8f557b: Found 0 pods out of 5 Dec 16 13:34:42.717: INFO: Pod name wrapped-volume-race-979ff3fb-aa1c-4a69-a0e9-b9768f8f557b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-979ff3fb-aa1c-4a69-a0e9-b9768f8f557b in namespace emptydir-wrapper-6462, will wait for the garbage collector to delete the pods Dec 16 13:35:12.812: INFO: Deleting ReplicationController wrapped-volume-race-979ff3fb-aa1c-4a69-a0e9-b9768f8f557b took: 13.603686ms Dec 16 13:35:13.313: INFO: Terminating ReplicationController wrapped-volume-race-979ff3fb-aa1c-4a69-a0e9-b9768f8f557b pods took: 501.143504ms STEP: Creating RC which spawns configmap-volume pods Dec 16 13:35:57.218: INFO: Pod name wrapped-volume-race-2a7343cb-932b-4b7b-891d-ca4702e4bfa6: Found 0 pods out of 5 Dec 16 13:36:02.291: INFO: Pod name wrapped-volume-race-2a7343cb-932b-4b7b-891d-ca4702e4bfa6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2a7343cb-932b-4b7b-891d-ca4702e4bfa6 in namespace emptydir-wrapper-6462, will wait for the garbage collector to delete the pods Dec 16 13:36:34.416: INFO: Deleting ReplicationController wrapped-volume-race-2a7343cb-932b-4b7b-891d-ca4702e4bfa6 took: 10.295271ms Dec 16 13:36:34.817: INFO: Terminating ReplicationController wrapped-volume-race-2a7343cb-932b-4b7b-891d-ca4702e4bfa6 pods took: 400.898962ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:37:17.445: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6462" for this suite. Dec 16 13:37:27.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:37:27.605: INFO: namespace emptydir-wrapper-6462 deletion completed in 10.150220646s • [SLOW TEST:248.152 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:37:27.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e20ed51a-8dcf-4c3b-a28b-89e9ba5318d0 STEP: Creating a pod to test consume configMaps Dec 16 13:37:27.834: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca" in namespace "configmap-2474" to be "success or failure" Dec 16 13:37:27.846: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.476565ms
Dec 16 13:37:29.859: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025404678s
Dec 16 13:37:31.895: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060776754s
Dec 16 13:37:33.922: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088215188s
Dec 16 13:37:35.935: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100514372s
Dec 16 13:37:38.001: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166777112s
Dec 16 13:37:40.013: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.178600699s
STEP: Saw pod success
Dec 16 13:37:40.013: INFO: Pod "pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca" satisfied condition "success or failure"
Dec 16 13:37:40.018: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca container configmap-volume-test:
STEP: delete the pod
Dec 16 13:37:40.061: INFO: Waiting for pod pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca to disappear
Dec 16 13:37:40.067: INFO: Pod pod-configmaps-f0f20772-4804-4fc5-a15e-2feea83185ca no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:37:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2474" for this suite.
Dec 16 13:37:46.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:37:46.301: INFO: namespace configmap-2474 deletion completed in 6.224214055s • [SLOW TEST:18.697 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:37:46.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 16 13:37:46.377: INFO: Creating deployment "test-recreate-deployment" Dec 16 13:37:46.382: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 16 13:37:46.405: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Dec 16 13:37:48.428: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 16 13:37:48.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 13:37:50.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 13:37:52.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100266, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 13:37:54.444: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 16 13:37:54.459: INFO: Updating deployment test-recreate-deployment Dec 16 13:37:54.460: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 16 13:37:55.071: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-334,SelfLink:/apis/apps/v1/namespaces/deployment-334/deployments/test-recreate-deployment,UID:a92eb0ee-2f67-48ff-86ad-023942d76e2f,ResourceVersion:16889806,Generation:2,CreationTimestamp:2019-12-16 13:37:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-16 13:37:54 +0000 UTC 2019-12-16 13:37:54 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-16 13:37:55 +0000 UTC 2019-12-16 13:37:46 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 16 13:37:55.166: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-334,SelfLink:/apis/apps/v1/namespaces/deployment-334/replicasets/test-recreate-deployment-5c8c9cc69d,UID:eb58c318-deef-4a71-9003-d222a5663167,ResourceVersion:16889802,Generation:1,CreationTimestamp:2019-12-16 13:37:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a92eb0ee-2f67-48ff-86ad-023942d76e2f 0xc002909b27 0xc002909b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 13:37:55.167: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 16 13:37:55.167: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-334,SelfLink:/apis/apps/v1/namespaces/deployment-334/replicasets/test-recreate-deployment-6df85df6b9,UID:89b9fd63-8b3a-4684-a283-fff68ca41f18,ResourceVersion:16889794,Generation:2,CreationTimestamp:2019-12-16 13:37:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a92eb0ee-2f67-48ff-86ad-023942d76e2f 0xc002909bf7 0xc002909bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 13:37:55.175: INFO: Pod "test-recreate-deployment-5c8c9cc69d-dj84q" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-dj84q,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-334,SelfLink:/api/v1/namespaces/deployment-334/pods/test-recreate-deployment-5c8c9cc69d-dj84q,UID:64756d7a-1d51-4342-88fb-b95c1bd77d87,ResourceVersion:16889807,Generation:0,CreationTimestamp:2019-12-16 13:37:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d eb58c318-deef-4a71-9003-d222a5663167 0xc001fb64d7 0xc001fb64d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dhtjx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dhtjx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dhtjx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fb6550} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fb6570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:37:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:37:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:37:54 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-16 13:37:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:37:55.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-334" for this suite. 
Dec 16 13:38:01.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:38:01.339: INFO: namespace deployment-334 deletion completed in 6.157872324s
• [SLOW TEST:15.037 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:38:01.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:38:11.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-969" for this suite.
Dec 16 13:38:51.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:38:51.762: INFO: namespace kubelet-test-969 deletion completed in 40.213771235s
• [SLOW TEST:50.423 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:38:51.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f9286bc7-d081-4766-b368-8b1e139af3f1
STEP: Creating configMap with name cm-test-opt-upd-aae7daa9-5328-4060-856e-44eeda0d4b60
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f9286bc7-d081-4766-b368-8b1e139af3f1
STEP: Updating configmap cm-test-opt-upd-aae7daa9-5328-4060-856e-44eeda0d4b60
STEP: Creating configMap with name cm-test-opt-create-df80d329-e78e-4f88-9874-f462f1885a88
STEP: waiting to observe update in volume
[AfterEach] [sig-storage]
ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:39:06.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-165" for this suite. Dec 16 13:39:28.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:39:28.326: INFO: namespace configmap-165 deletion completed in 22.141748345s • [SLOW TEST:36.563 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:39:28.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:39:28.478: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2116" for this suite. Dec 16 13:39:50.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:39:50.667: INFO: namespace pods-2116 deletion completed in 22.166378638s • [SLOW TEST:22.341 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:39:50.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9312 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 16 13:39:50.821: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 16 13:40:20.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 
http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9312 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 16 13:40:20.959: INFO: >>> kubeConfig: /root/.kube/config Dec 16 13:40:21.574: INFO: Found all expected endpoints: [netserver-0] Dec 16 13:40:21.582: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9312 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 16 13:40:21.582: INFO: >>> kubeConfig: /root/.kube/config Dec 16 13:40:21.941: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:40:21.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9312" for this suite. 
Dec 16 13:40:48.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:40:48.093: INFO: namespace pod-network-test-9312 deletion completed in 26.116272089s • [SLOW TEST:57.425 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:40:48.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Dec 16 13:40:48.195: INFO: Waiting up to 5m0s for pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0" in namespace "var-expansion-8175" to be "success or failure" Dec 16 13:40:48.207: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.955031ms Dec 16 13:40:50.218: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022657035s Dec 16 13:40:52.225: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029754031s Dec 16 13:40:54.239: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044548799s Dec 16 13:40:56.265: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06999052s Dec 16 13:40:58.273: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077989618s STEP: Saw pod success Dec 16 13:40:58.273: INFO: Pod "var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0" satisfied condition "success or failure" Dec 16 13:40:58.278: INFO: Trying to get logs from node iruya-node pod var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0 container dapi-container: STEP: delete the pod Dec 16 13:40:58.342: INFO: Waiting for pod var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0 to disappear Dec 16 13:40:58.362: INFO: Pod var-expansion-73f11e53-21e7-43ae-9767-25f2eb9849e0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:40:58.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8175" for this suite. 
Dec 16 13:41:04.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:41:04.560: INFO: namespace var-expansion-8175 deletion completed in 6.189863173s • [SLOW TEST:16.467 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:41:04.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f620d3c2-7c6e-46d4-9611-64edd0ef49f5 STEP: Creating a pod to test consume secrets Dec 16 13:41:04.744: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2" in namespace "projected-2045" to be "success or failure" Dec 16 13:41:04.779: INFO: Pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.062757ms Dec 16 13:41:06.788: INFO: Pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044369268s Dec 16 13:41:08.795: INFO: Pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051296715s Dec 16 13:41:10.802: INFO: Pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058345011s Dec 16 13:41:12.819: INFO: Pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07556784s STEP: Saw pod success Dec 16 13:41:12.820: INFO: Pod "pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2" satisfied condition "success or failure" Dec 16 13:41:12.830: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2 container projected-secret-volume-test: STEP: delete the pod Dec 16 13:41:12.913: INFO: Waiting for pod pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2 to disappear Dec 16 13:41:12.921: INFO: Pod pod-projected-secrets-a49342b1-c4ef-46fe-9d03-075e8c92e6a2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:41:12.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2045" for this suite. 
Dec 16 13:41:18.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:41:19.094: INFO: namespace projected-2045 deletion completed in 6.167448021s • [SLOW TEST:14.533 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:41:19.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Dec 16 13:41:26.036: INFO: 0 pods remaining Dec 16 13:41:26.036: INFO: 0 pods has nil DeletionTimestamp Dec 16 13:41:26.036: INFO: STEP: Gathering metrics W1216 13:41:26.950613 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 16 13:41:26.950: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:41:26.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1817" for this suite.
Dec 16 13:41:37.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:41:37.252: INFO: namespace gc-1817 deletion completed in 10.295692949s • [SLOW TEST:18.158 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:41:37.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 in namespace container-probe-2150 Dec 16 13:41:45.389: INFO: Started pod liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 in namespace container-probe-2150 STEP: checking the pod's current state and verifying that restartCount is present Dec 16 13:41:45.396: INFO: Initial restart count of pod liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 is 0 Dec 16 13:42:07.509: INFO: Restart count of pod 
container-probe-2150/liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 is now 1 (22.11270302s elapsed)
Dec 16 13:42:25.615: INFO: Restart count of pod container-probe-2150/liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 is now 2 (40.218198889s elapsed)
Dec 16 13:42:48.182: INFO: Restart count of pod container-probe-2150/liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 is now 3 (1m2.78510991s elapsed)
Dec 16 13:43:06.273: INFO: Restart count of pod container-probe-2150/liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 is now 4 (1m20.876185498s elapsed)
Dec 16 13:44:10.661: INFO: Restart count of pod container-probe-2150/liveness-aa76fdfd-aa0e-4e79-8d8e-bb2ba235dcc5 is now 5 (2m25.264219157s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:44:10.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2150" for this suite.
Dec 16 13:44:16.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:44:16.994: INFO: namespace container-probe-2150 deletion completed in 6.183454837s

• [SLOW TEST:159.742 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:44:16.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 16 13:44:17.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 16 13:44:17.283: INFO: stderr: ""
Dec 16 13:44:17.283: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:44:17.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7619" for this suite.
Dec 16 13:44:23.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:44:23.461: INFO: namespace kubectl-7619 deletion completed in 6.161524431s

• [SLOW TEST:6.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:44:23.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 16 13:44:23.613: INFO: Waiting up to 5m0s for pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965" in namespace "var-expansion-6263" to be "success or failure"
Dec 16 13:44:23.621: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965": Phase="Pending", Reason="", readiness=false. Elapsed: 6.85034ms
Dec 16 13:44:25.628: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013952832s
Dec 16 13:44:27.637: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023053574s
Dec 16 13:44:29.645: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031237798s
Dec 16 13:44:31.653: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965": Phase="Running", Reason="", readiness=true. Elapsed: 8.039076305s
Dec 16 13:44:33.669: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054900272s
STEP: Saw pod success
Dec 16 13:44:33.669: INFO: Pod "var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965" satisfied condition "success or failure"
Dec 16 13:44:33.673: INFO: Trying to get logs from node iruya-node pod var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965 container dapi-container:
STEP: delete the pod
Dec 16 13:44:33.811: INFO: Waiting for pod var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965 to disappear
Dec 16 13:44:33.824: INFO: Pod var-expansion-5b0e6088-be96-4f09-9afe-f61d72990965 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:44:33.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6263" for this suite.
Dec 16 13:44:39.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:44:40.133: INFO: namespace var-expansion-6263 deletion completed in 6.303623478s

• [SLOW TEST:16.671 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:44:40.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 16 13:47:39.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:39.440: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:41.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:41.451: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:43.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:43.449: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:45.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:45.449: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:47.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:47.457: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:49.441: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:49.457: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:51.441: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:51.452: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:53.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:53.450: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:55.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:55.453: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:57.441: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:57.456: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:47:59.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:47:59.450: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 13:48:01.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 13:48:01.452: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:48:01.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5366" for this suite.
Dec 16 13:48:23.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:48:23.613: INFO: namespace container-lifecycle-hook-5366 deletion completed in 22.154272394s

• [SLOW TEST:223.479 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:48:23.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 16 13:48:31.861: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:48:31.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-712" for this suite.
Dec 16 13:48:38.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:48:38.126: INFO: namespace container-runtime-712 deletion completed in 6.195075897s

• [SLOW TEST:14.512 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:48:38.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 16 13:48:38.252: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 16 13:48:38.897: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 16 13:48:41.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:48:43.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:48:45.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:48:47.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:48:49.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100918, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:48:54.604: INFO: Waited 3.348025505s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:48:55.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-318" for this suite.
Dec 16 13:49:01.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:49:01.999: INFO: namespace aggregator-318 deletion completed in 6.138619521s

• [SLOW TEST:23.873 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:49:02.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 13:49:02.247: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 16 13:49:02.403: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 16 13:49:10.470: INFO: Creating deployment "test-rolling-update-deployment"
Dec 16 13:49:10.485: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 16 13:49:10.497: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 16 13:49:12.517: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 16 13:49:12.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:49:14.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:49:16.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712100950, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 13:49:18.540: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 16 13:49:18.559: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-125,SelfLink:/apis/apps/v1/namespaces/deployment-125/deployments/test-rolling-update-deployment,UID:a1a84814-1129-423c-97cd-41d60d541d5c,ResourceVersion:16891316,Generation:1,CreationTimestamp:2019-12-16 13:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-16 13:49:10 +0000 UTC 2019-12-16 13:49:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-16 13:49:17 +0000 UTC 2019-12-16 13:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Dec 16 13:49:18.564: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-125,SelfLink:/apis/apps/v1/namespaces/deployment-125/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:499329e3-922a-4dd5-9514-36073e72d922,ResourceVersion:16891305,Generation:1,CreationTimestamp:2019-12-16 13:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a1a84814-1129-423c-97cd-41d60d541d5c 0xc0025198f7 0xc0025198f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 16 13:49:18.564: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 16 13:49:18.565: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-125,SelfLink:/apis/apps/v1/namespaces/deployment-125/replicasets/test-rolling-update-controller,UID:4db05c18-0123-46d1-9e45-c9234a2bb198,ResourceVersion:16891314,Generation:2,CreationTimestamp:2019-12-16 13:49:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a1a84814-1129-423c-97cd-41d60d541d5c 0xc002519827 0xc002519828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 16 13:49:18.570: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-vh8wr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-vh8wr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-125,SelfLink:/api/v1/namespaces/deployment-125/pods/test-rolling-update-deployment-79f6b9d75c-vh8wr,UID:9d8b1546-0841-4943-a0a5-d3b0c4708b59,ResourceVersion:16891304,Generation:0,CreationTimestamp:2019-12-16 13:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 499329e3-922a-4dd5-9514-36073e72d922 0xc00179a6f7 0xc00179a6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nc4q4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nc4q4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-nc4q4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00179a770} {node.kubernetes.io/unreachable Exists NoExecute 0xc00179a790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:49:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:49:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:49:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:49:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-16 13:49:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-16 13:49:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://7187a759267fd78c19e6b87f1f3156523f371543e28a53631e27908cd1054153}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:49:18.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-125" for this suite.
Dec 16 13:49:24.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:49:24.768: INFO: namespace deployment-125 deletion completed in 6.19220649s

• [SLOW TEST:22.768 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:49:24.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2295
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2295
STEP: Creating statefulset with conflicting port in namespace statefulset-2295
STEP:
Waiting until pod test-pod will start running in namespace statefulset-2295 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2295 Dec 16 13:49:37.004: INFO: Observed stateful pod in namespace: statefulset-2295, name: ss-0, uid: c177739c-477f-47a9-aca9-9ddf3966c255, status phase: Pending. Waiting for statefulset controller to delete. Dec 16 13:49:37.071: INFO: Observed stateful pod in namespace: statefulset-2295, name: ss-0, uid: c177739c-477f-47a9-aca9-9ddf3966c255, status phase: Failed. Waiting for statefulset controller to delete. Dec 16 13:49:37.097: INFO: Observed stateful pod in namespace: statefulset-2295, name: ss-0, uid: c177739c-477f-47a9-aca9-9ddf3966c255, status phase: Failed. Waiting for statefulset controller to delete. Dec 16 13:49:37.109: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2295 STEP: Removing pod with conflicting port in namespace statefulset-2295 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2295 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 16 13:49:55.350: INFO: Deleting all statefulset in ns statefulset-2295 Dec 16 13:49:55.357: INFO: Scaling statefulset ss to 0 Dec 16 13:50:05.410: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 13:50:05.416: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:50:05.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2295" for this suite. 
Dec 16 13:50:11.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:50:11.703: INFO: namespace statefulset-2295 deletion completed in 6.238618955s • [SLOW TEST:46.933 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:50:11.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 16 13:50:11.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8033' Dec 16 13:50:14.570: INFO: stderr: "" Dec 16 13:50:14.571: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Dec 16 13:50:15.580: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:15.580: INFO: Found 0 / 1 Dec 16 13:50:16.587: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:16.587: INFO: Found 0 / 1 Dec 16 13:50:17.586: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:17.586: INFO: Found 0 / 1 Dec 16 13:50:18.599: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:18.599: INFO: Found 0 / 1 Dec 16 13:50:19.577: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:19.577: INFO: Found 0 / 1 Dec 16 13:50:20.581: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:20.581: INFO: Found 0 / 1 Dec 16 13:50:21.580: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:21.580: INFO: Found 1 / 1 Dec 16 13:50:21.580: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Dec 16 13:50:21.585: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:21.585: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 16 13:50:21.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nc8xd --namespace=kubectl-8033 -p {"metadata":{"annotations":{"x":"y"}}}' Dec 16 13:50:21.799: INFO: stderr: "" Dec 16 13:50:21.799: INFO: stdout: "pod/redis-master-nc8xd patched\n" STEP: checking annotations Dec 16 13:50:21.814: INFO: Selector matched 1 pods for map[app:redis] Dec 16 13:50:21.814: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:50:21.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8033" for this suite. 
Dec 16 13:50:43.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:50:44.025: INFO: namespace kubectl-8033 deletion completed in 22.146499196s • [SLOW TEST:32.322 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:50:44.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 16 13:50:52.213: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 16 13:51:07.372: INFO: no pod exists with the name we were looking for, assuming the termination request 
was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:51:07.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8859" for this suite. Dec 16 13:51:13.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:51:13.571: INFO: namespace pods-8859 deletion completed in 6.17836919s • [SLOW TEST:29.545 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:51:13.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 16 13:51:13.700: INFO: Waiting up to 5m0s for pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e" in namespace "emptydir-1414" to be "success or failure" Dec 
16 13:51:13.713: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.608787ms Dec 16 13:51:15.720: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019384087s Dec 16 13:51:17.735: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034782465s Dec 16 13:51:19.745: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044558805s Dec 16 13:51:21.754: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053511352s Dec 16 13:51:23.771: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07005373s STEP: Saw pod success Dec 16 13:51:23.771: INFO: Pod "pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e" satisfied condition "success or failure" Dec 16 13:51:23.783: INFO: Trying to get logs from node iruya-node pod pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e container test-container: STEP: delete the pod Dec 16 13:51:23.962: INFO: Waiting for pod pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e to disappear Dec 16 13:51:23.969: INFO: Pod pod-18dbaeac-7812-46d2-a2b4-4241b0129e5e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:51:23.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1414" for this suite. 
Dec 16 13:51:30.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:51:30.197: INFO: namespace emptydir-1414 deletion completed in 6.212818578s • [SLOW TEST:16.625 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:51:30.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 16 13:51:30.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e" in namespace "downward-api-2169" to be "success or failure" Dec 16 13:51:30.276: INFO: Pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.850534ms Dec 16 13:51:32.512: INFO: Pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25478181s Dec 16 13:51:34.524: INFO: Pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266972812s Dec 16 13:51:36.537: INFO: Pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280308086s Dec 16 13:51:38.559: INFO: Pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.301648148s STEP: Saw pod success Dec 16 13:51:38.559: INFO: Pod "downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e" satisfied condition "success or failure" Dec 16 13:51:38.574: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e container client-container: STEP: delete the pod Dec 16 13:51:38.669: INFO: Waiting for pod downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e to disappear Dec 16 13:51:38.676: INFO: Pod downwardapi-volume-dd2154c9-ea55-498b-9757-911ba7132e7e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:51:38.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2169" for this suite. 
Dec 16 13:51:44.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:51:44.897: INFO: namespace downward-api-2169 deletion completed in 6.216368326s • [SLOW TEST:14.699 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:51:44.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 16 13:51:45.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94" in namespace "projected-5211" to be "success or failure" Dec 16 13:51:45.057: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.821603ms Dec 16 13:51:47.069: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018348024s Dec 16 13:51:49.083: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031591423s Dec 16 13:51:51.088: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036646556s Dec 16 13:51:53.102: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94": Phase="Running", Reason="", readiness=true. Elapsed: 8.051032992s Dec 16 13:51:55.281: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.229648895s STEP: Saw pod success Dec 16 13:51:55.281: INFO: Pod "downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94" satisfied condition "success or failure" Dec 16 13:51:55.288: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94 container client-container: STEP: delete the pod Dec 16 13:51:55.459: INFO: Waiting for pod downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94 to disappear Dec 16 13:51:55.472: INFO: Pod downwardapi-volume-d156ea54-5709-4a56-8ef8-15c25fb3bf94 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:51:55.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5211" for this suite. 
Dec 16 13:52:01.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:52:01.626: INFO: namespace projected-5211 deletion completed in 6.14645818s • [SLOW TEST:16.729 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:52:01.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 16 13:52:01.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6154' Dec 16 13:52:02.325: INFO: stderr: "" Dec 16 13:52:02.325: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods 
to come up. Dec 16 13:52:02.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6154' Dec 16 13:52:02.519: INFO: stderr: "" Dec 16 13:52:02.519: INFO: stdout: "update-demo-nautilus-dtv9s update-demo-nautilus-jrj5t " Dec 16 13:52:02.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtv9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6154' Dec 16 13:52:02.676: INFO: stderr: "" Dec 16 13:52:02.676: INFO: stdout: "" Dec 16 13:52:02.676: INFO: update-demo-nautilus-dtv9s is created but not running Dec 16 13:52:07.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6154' Dec 16 13:52:08.715: INFO: stderr: "" Dec 16 13:52:08.715: INFO: stdout: "update-demo-nautilus-dtv9s update-demo-nautilus-jrj5t " Dec 16 13:52:08.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtv9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6154' Dec 16 13:52:09.745: INFO: stderr: "" Dec 16 13:52:09.745: INFO: stdout: "" Dec 16 13:52:09.746: INFO: update-demo-nautilus-dtv9s is created but not running Dec 16 13:52:14.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6154' Dec 16 13:52:14.963: INFO: stderr: "" Dec 16 13:52:14.963: INFO: stdout: "update-demo-nautilus-dtv9s update-demo-nautilus-jrj5t " Dec 16 13:52:14.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtv9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6154' Dec 16 13:52:15.100: INFO: stderr: "" Dec 16 13:52:15.100: INFO: stdout: "true" Dec 16 13:52:15.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtv9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6154' Dec 16 13:52:15.201: INFO: stderr: "" Dec 16 13:52:15.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 13:52:15.201: INFO: validating pod update-demo-nautilus-dtv9s Dec 16 13:52:15.211: INFO: got data: { "image": "nautilus.jpg" } Dec 16 13:52:15.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 13:52:15.211: INFO: update-demo-nautilus-dtv9s is verified up and running Dec 16 13:52:15.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrj5t -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6154' Dec 16 13:52:15.305: INFO: stderr: "" Dec 16 13:52:15.305: INFO: stdout: "true" Dec 16 13:52:15.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrj5t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6154' Dec 16 13:52:15.422: INFO: stderr: "" Dec 16 13:52:15.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 13:52:15.422: INFO: validating pod update-demo-nautilus-jrj5t Dec 16 13:52:15.457: INFO: got data: { "image": "nautilus.jpg" } Dec 16 13:52:15.457: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 13:52:15.457: INFO: update-demo-nautilus-jrj5t is verified up and running STEP: using delete to clean up resources Dec 16 13:52:15.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6154' Dec 16 13:52:15.621: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 16 13:52:15.621: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 16 13:52:15.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6154' Dec 16 13:52:15.765: INFO: stderr: "No resources found.\n" Dec 16 13:52:15.766: INFO: stdout: "" Dec 16 13:52:15.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6154 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 16 13:52:16.117: INFO: stderr: "" Dec 16 13:52:16.118: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:52:16.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6154" for this suite. 
Dec 16 13:52:38.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:52:38.277: INFO: namespace kubectl-6154 deletion completed in 22.140611539s • [SLOW TEST:36.651 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:52:38.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[] Dec 16 13:52:38.554: INFO: Get endpoints failed (58.538899ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Dec 16 13:52:39.561: INFO: successfully validated that service endpoint-test2 in namespace services-5564 
exposes endpoints map[] (1.065524273s elapsed) STEP: Creating pod pod1 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[pod1:[80]] Dec 16 13:52:43.810: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.231685883s elapsed, will retry) Dec 16 13:52:46.894: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[pod1:[80]] (7.315821331s elapsed) STEP: Creating pod pod2 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[pod1:[80] pod2:[80]] Dec 16 13:52:51.357: INFO: Unexpected endpoints: found map[f719124b-13b1-46f3-8415-80c595187758:[80]], expected map[pod1:[80] pod2:[80]] (4.456676331s elapsed, will retry) Dec 16 13:52:54.785: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[pod1:[80] pod2:[80]] (7.884386777s elapsed) STEP: Deleting pod pod1 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[pod2:[80]] Dec 16 13:52:55.843: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[pod2:[80]] (1.050112432s elapsed) STEP: Deleting pod pod2 in namespace services-5564 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5564 to expose endpoints map[] Dec 16 13:52:57.946: INFO: successfully validated that service endpoint-test2 in namespace services-5564 exposes endpoints map[] (2.084838642s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:52:58.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5564" for this suite. 
Dec 16 13:53:04.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:53:04.887: INFO: namespace services-5564 deletion completed in 6.287813282s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:26.607 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:53:04.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 16 13:53:13.687: INFO: Successfully updated pod "annotationupdatebc26b3ab-8e84-420a-a536-b4bf6d9820f0" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:53:15.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"downward-api-7027" for this suite. Dec 16 13:53:37.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:53:38.062: INFO: namespace downward-api-7027 deletion completed in 22.121188929s • [SLOW TEST:33.175 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:53:38.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Dec 16 13:53:46.312: INFO: Pod pod-hostip-2db3b1af-ea35-4244-b8fd-abd2b300d47a has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:53:46.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8586" for this suite. 
Dec 16 13:54:08.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:54:08.550: INFO: namespace pods-8586 deletion completed in 22.233585043s • [SLOW TEST:30.488 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:54:08.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Dec 16 13:54:16.783: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-389acc44-e972-4cc1-aa2b-f3a961480c64,GenerateName:,Namespace:events-8123,SelfLink:/api/v1/namespaces/events-8123/pods/send-events-389acc44-e972-4cc1-aa2b-f3a961480c64,UID:6d02d7bb-065e-4ed2-a8bd-578624fcbd92,ResourceVersion:16892167,Generation:0,CreationTimestamp:2019-12-16 13:54:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
656050673,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kjnn9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kjnn9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kjnn9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029456b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029456d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:54:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:54:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:54:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:54:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-16 13:54:08 +0000 UTC,ContainerStatuses:[{p {nil 
ContainerStateRunning{StartedAt:2019-12-16 13:54:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://78fc8b3bdaa848b00773cdd99483600265ad2a5c4f6f8a13d2c08eb08de3c517}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Dec 16 13:54:18.793: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Dec 16 13:54:20.803: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:54:20.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8123" for this suite. Dec 16 13:55:12.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:55:13.052: INFO: namespace events-8123 deletion completed in 52.216965762s • [SLOW TEST:64.502 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Dec 16 13:55:13.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2740 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2740 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2740 Dec 16 13:55:13.363: INFO: Found 0 stateful pods, waiting for 1 Dec 16 13:55:23.374: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 16 13:55:23.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 13:55:24.327: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 16 13:55:24.327: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 13:55:24.327: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 13:55:24.337: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 16 13:55:34.352: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 16 
13:55:34.352: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 13:55:34.376: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998406s Dec 16 13:55:35.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991546321s Dec 16 13:55:36.400: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978410733s Dec 16 13:55:37.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.967205092s Dec 16 13:55:38.455: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959063375s Dec 16 13:55:39.470: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.913025203s Dec 16 13:55:40.489: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.897455848s Dec 16 13:55:41.520: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.878623382s Dec 16 13:55:42.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.847997675s Dec 16 13:55:43.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 831.773206ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2740 Dec 16 13:55:44.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 13:55:45.186: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 16 13:55:45.186: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 13:55:45.186: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 13:55:45.195: INFO: Found 1 stateful pods, waiting for 3 Dec 16 13:55:55.208: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 13:55:55.208: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true 
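A note on the `mv` dance visible above: this is how the e2e suite toggles pod readiness. The StatefulSet's pods serve `index.html` behind an HTTP readiness probe, so moving the file out of the nginx webroot makes the probe fail and the pod go NotReady, which in turn must halt ordered scaling. The log does not print the probe definition; a sketch consistent with the observed behavior, offered as an assumption:

```yaml
# Assumed shape of the readiness probe on the "ss" pods. With index.html
# moved to /tmp, this GET returns 404 and the pod is marked NotReady;
# moving the file back restores readiness.
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  periodSeconds: 1
```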
Dec 16 13:55:55.208: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 16 13:56:05.208: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 13:56:05.208: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 13:56:05.208: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 16 13:56:05.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 13:56:05.828: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 16 13:56:05.828: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 13:56:05.828: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 13:56:05.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 13:56:06.246: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 16 13:56:06.247: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 13:56:06.247: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 13:56:06.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 13:56:06.910: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 16 13:56:06.910: INFO: stdout: "'/usr/share/nginx/html/index.html' -> 
'/tmp/index.html'\n" Dec 16 13:56:06.910: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 13:56:06.910: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 13:56:06.924: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 16 13:56:16.946: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 16 13:56:16.946: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 16 13:56:16.946: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 16 13:56:16.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999992643s Dec 16 13:56:18.007: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990836709s Dec 16 13:56:19.018: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974718436s Dec 16 13:56:20.026: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963555242s Dec 16 13:56:21.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95531285s Dec 16 13:56:22.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.945924772s Dec 16 13:56:23.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.806328609s Dec 16 13:56:24.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.795404768s Dec 16 13:56:25.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.78775854s Dec 16 13:56:26.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 778.364157ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2740 Dec 16 13:56:27.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16
13:56:27.838: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 16 13:56:27.838: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 13:56:27.838: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 13:56:27.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 13:56:28.207: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 16 13:56:28.208: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 13:56:28.208: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 13:56:28.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2740 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 13:56:28.784: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 16 13:56:28.785: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 13:56:28.785: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 13:56:28.785: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 16 13:56:48.926: INFO: Deleting all statefulset in ns statefulset-2740 Dec 16 13:56:48.932: INFO: Scaling statefulset ss to 0 Dec 16 13:56:48.954: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 13:56:48.957: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:56:48.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2740" for this suite. Dec 16 13:56:55.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:56:55.193: INFO: namespace statefulset-2740 deletion completed in 6.203048482s • [SLOW TEST:102.140 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 13:56:55.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
running the image docker.io/library/nginx:1.14-alpine Dec 16 13:56:55.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6047' Dec 16 13:56:55.431: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 16 13:56:55.431: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Dec 16 13:56:57.456: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-d4xvb] Dec 16 13:56:57.457: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-d4xvb" in namespace "kubectl-6047" to be "running and ready" Dec 16 13:56:57.459: INFO: Pod "e2e-test-nginx-rc-d4xvb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.692616ms Dec 16 13:56:59.467: INFO: Pod "e2e-test-nginx-rc-d4xvb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010633243s Dec 16 13:57:01.479: INFO: Pod "e2e-test-nginx-rc-d4xvb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022089593s Dec 16 13:57:03.513: INFO: Pod "e2e-test-nginx-rc-d4xvb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056774336s Dec 16 13:57:05.522: INFO: Pod "e2e-test-nginx-rc-d4xvb": Phase="Running", Reason="", readiness=true. Elapsed: 8.065604943s Dec 16 13:57:05.522: INFO: Pod "e2e-test-nginx-rc-d4xvb" satisfied condition "running and ready" Dec 16 13:57:05.522: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-d4xvb] Dec 16 13:57:05.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6047' Dec 16 13:57:05.886: INFO: stderr: "" Dec 16 13:57:05.886: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Dec 16 13:57:05.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6047' Dec 16 13:57:06.104: INFO: stderr: "" Dec 16 13:57:06.104: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 16 13:57:06.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6047" for this suite. Dec 16 13:57:28.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 13:57:28.356: INFO: namespace kubectl-6047 deletion completed in 22.24732138s • [SLOW TEST:33.162 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 16 
13:57:28.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7690 I1216 13:57:28.428428 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7690, replica count: 1 I1216 13:57:29.479373 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 13:57:30.480046 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 13:57:31.480632 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 13:57:32.481181 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 13:57:33.481620 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 13:57:34.482122 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 13:57:35.482872 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 16 13:57:35.719: INFO: Created: latency-svc-t58ml Dec 16 13:57:35.752: INFO: Got endpoints: latency-svc-t58ml [168.521011ms] Dec 16 13:57:35.908: INFO: Created: latency-svc-tshk6 Dec 16 13:57:35.908: INFO: Got endpoints: 
latency-svc-tshk6 [153.239921ms] Dec 16 13:57:35.962: INFO: Created: latency-svc-vhn27 Dec 16 13:57:36.059: INFO: Created: latency-svc-zdn9x Dec 16 13:57:36.059: INFO: Got endpoints: latency-svc-vhn27 [305.62746ms] Dec 16 13:57:36.068: INFO: Got endpoints: latency-svc-zdn9x [315.480334ms] Dec 16 13:57:36.139: INFO: Created: latency-svc-6ktbt Dec 16 13:57:36.215: INFO: Got endpoints: latency-svc-6ktbt [461.151171ms] Dec 16 13:57:36.231: INFO: Created: latency-svc-c85hj Dec 16 13:57:36.240: INFO: Got endpoints: latency-svc-c85hj [488.326398ms] Dec 16 13:57:36.278: INFO: Created: latency-svc-dcp9b Dec 16 13:57:36.291: INFO: Got endpoints: latency-svc-dcp9b [537.812398ms] Dec 16 13:57:36.385: INFO: Created: latency-svc-5dff8 Dec 16 13:57:36.408: INFO: Got endpoints: latency-svc-5dff8 [654.860272ms] Dec 16 13:57:36.471: INFO: Created: latency-svc-chbqc Dec 16 13:57:36.575: INFO: Got endpoints: latency-svc-chbqc [821.577126ms] Dec 16 13:57:36.622: INFO: Created: latency-svc-zjzwt Dec 16 13:57:36.667: INFO: Got endpoints: latency-svc-zjzwt [913.395128ms] Dec 16 13:57:36.669: INFO: Created: latency-svc-t2scv Dec 16 13:57:36.743: INFO: Got endpoints: latency-svc-t2scv [989.6498ms] Dec 16 13:57:36.805: INFO: Created: latency-svc-rbjr8 Dec 16 13:57:36.827: INFO: Got endpoints: latency-svc-rbjr8 [1.073104659s] Dec 16 13:57:36.953: INFO: Created: latency-svc-wb7rb Dec 16 13:57:36.955: INFO: Got endpoints: latency-svc-wb7rb [1.20103516s] Dec 16 13:57:36.994: INFO: Created: latency-svc-frdhk Dec 16 13:57:36.999: INFO: Got endpoints: latency-svc-frdhk [1.244931265s] Dec 16 13:57:37.209: INFO: Created: latency-svc-25gvp Dec 16 13:57:37.327: INFO: Created: latency-svc-x6dkn Dec 16 13:57:37.327: INFO: Got endpoints: latency-svc-25gvp [1.574253533s] Dec 16 13:57:37.342: INFO: Got endpoints: latency-svc-x6dkn [1.588401305s] Dec 16 13:57:37.473: INFO: Created: latency-svc-z5cpm Dec 16 13:57:37.515: INFO: Got endpoints: latency-svc-z5cpm [1.607251589s] Dec 16 13:57:37.522: INFO: Created: 
latency-svc-n86r9
Dec 16 13:57:37.543: INFO: Got endpoints: latency-svc-n86r9 [1.483717866s]
Dec 16 13:57:37.643: INFO: Created: latency-svc-rvgk4
Dec 16 13:57:37.648: INFO: Got endpoints: latency-svc-rvgk4 [1.579709454s]
Dec 16 13:57:37.689: INFO: Created: latency-svc-zt7nd
Dec 16 13:57:37.692: INFO: Got endpoints: latency-svc-zt7nd [1.47731506s]
Dec 16 13:57:37.742: INFO: Created: latency-svc-9787n
Dec 16 13:57:37.860: INFO: Got endpoints: latency-svc-9787n [1.619517303s]
Dec 16 13:57:37.911: INFO: Created: latency-svc-clfxg
Dec 16 13:57:37.911: INFO: Got endpoints: latency-svc-clfxg [1.619496448s]
Dec 16 13:57:38.014: INFO: Created: latency-svc-cvggs
Dec 16 13:57:38.033: INFO: Got endpoints: latency-svc-cvggs [1.624689198s]
Dec 16 13:57:38.079: INFO: Created: latency-svc-t6k9n
Dec 16 13:57:38.089: INFO: Got endpoints: latency-svc-t6k9n [1.513480234s]
Dec 16 13:57:38.175: INFO: Created: latency-svc-wl58z
Dec 16 13:57:38.184: INFO: Got endpoints: latency-svc-wl58z [1.516253551s]
Dec 16 13:57:38.236: INFO: Created: latency-svc-9cmlx
Dec 16 13:57:38.240: INFO: Got endpoints: latency-svc-9cmlx [1.495773468s]
Dec 16 13:57:38.340: INFO: Created: latency-svc-4n7mt
Dec 16 13:57:38.348: INFO: Got endpoints: latency-svc-4n7mt [1.520754207s]
Dec 16 13:57:38.398: INFO: Created: latency-svc-xrsgb
Dec 16 13:57:38.405: INFO: Got endpoints: latency-svc-xrsgb [1.449970085s]
Dec 16 13:57:38.521: INFO: Created: latency-svc-x9st8
Dec 16 13:57:38.541: INFO: Got endpoints: latency-svc-x9st8 [1.541851449s]
Dec 16 13:57:38.598: INFO: Created: latency-svc-7lxxb
Dec 16 13:57:38.617: INFO: Got endpoints: latency-svc-7lxxb [1.289852463s]
Dec 16 13:57:38.735: INFO: Created: latency-svc-knsn7
Dec 16 13:57:38.778: INFO: Got endpoints: latency-svc-knsn7 [1.435230919s]
Dec 16 13:57:38.920: INFO: Created: latency-svc-sp5kq
Dec 16 13:57:38.933: INFO: Got endpoints: latency-svc-sp5kq [1.41711986s]
Dec 16 13:57:39.098: INFO: Created: latency-svc-x87rv
Dec 16 13:57:39.101: INFO: Got endpoints: latency-svc-x87rv [1.557460841s]
Dec 16 13:57:39.162: INFO: Created: latency-svc-h2bdh
Dec 16 13:57:39.169: INFO: Got endpoints: latency-svc-h2bdh [1.520473967s]
Dec 16 13:57:39.317: INFO: Created: latency-svc-f797j
Dec 16 13:57:39.381: INFO: Got endpoints: latency-svc-f797j [1.688577893s]
Dec 16 13:57:39.389: INFO: Created: latency-svc-8rb5f
Dec 16 13:57:39.535: INFO: Got endpoints: latency-svc-8rb5f [1.673520291s]
Dec 16 13:57:39.551: INFO: Created: latency-svc-mpm78
Dec 16 13:57:39.552: INFO: Got endpoints: latency-svc-mpm78 [1.640325795s]
Dec 16 13:57:39.601: INFO: Created: latency-svc-c9l7d
Dec 16 13:57:39.610: INFO: Got endpoints: latency-svc-c9l7d [1.576371202s]
Dec 16 13:57:39.740: INFO: Created: latency-svc-tcnf7
Dec 16 13:57:39.911: INFO: Got endpoints: latency-svc-tcnf7 [1.821744968s]
Dec 16 13:57:39.914: INFO: Created: latency-svc-c6x85
Dec 16 13:57:39.921: INFO: Got endpoints: latency-svc-c6x85 [1.737271221s]
Dec 16 13:57:39.975: INFO: Created: latency-svc-9c2nx
Dec 16 13:57:39.988: INFO: Got endpoints: latency-svc-9c2nx [1.747799646s]
Dec 16 13:57:40.057: INFO: Created: latency-svc-qtwzf
Dec 16 13:57:40.079: INFO: Got endpoints: latency-svc-qtwzf [1.730470137s]
Dec 16 13:57:40.131: INFO: Created: latency-svc-d8tgv
Dec 16 13:57:40.139: INFO: Got endpoints: latency-svc-d8tgv [150.980435ms]
Dec 16 13:57:40.247: INFO: Created: latency-svc-tz6hw
Dec 16 13:57:40.253: INFO: Got endpoints: latency-svc-tz6hw [1.847259513s]
Dec 16 13:57:40.305: INFO: Created: latency-svc-kfc9l
Dec 16 13:57:40.307: INFO: Got endpoints: latency-svc-kfc9l [1.765634751s]
Dec 16 13:57:40.394: INFO: Created: latency-svc-cg22h
Dec 16 13:57:40.408: INFO: Got endpoints: latency-svc-cg22h [1.790171508s]
Dec 16 13:57:40.460: INFO: Created: latency-svc-hgklv
Dec 16 13:57:40.471: INFO: Got endpoints: latency-svc-hgklv [1.692739634s]
Dec 16 13:57:40.608: INFO: Created: latency-svc-985g5
Dec 16 13:57:40.619: INFO: Got endpoints: latency-svc-985g5 [1.685281726s]
Dec 16 13:57:40.718: INFO: Created: latency-svc-g4q94
Dec 16 13:57:40.727: INFO: Got endpoints: latency-svc-g4q94 [1.625611533s]
Dec 16 13:57:40.783: INFO: Created: latency-svc-gj8gv
Dec 16 13:57:40.894: INFO: Got endpoints: latency-svc-gj8gv [1.72489691s]
Dec 16 13:57:41.668: INFO: Created: latency-svc-47bgm
Dec 16 13:57:41.675: INFO: Got endpoints: latency-svc-47bgm [2.293130039s]
Dec 16 13:57:41.741: INFO: Created: latency-svc-qhmgn
Dec 16 13:57:41.874: INFO: Got endpoints: latency-svc-qhmgn [2.338900532s]
Dec 16 13:57:41.925: INFO: Created: latency-svc-dqrz7
Dec 16 13:57:41.933: INFO: Got endpoints: latency-svc-dqrz7 [2.381645685s]
Dec 16 13:57:42.108: INFO: Created: latency-svc-dh6zf
Dec 16 13:57:42.127: INFO: Got endpoints: latency-svc-dh6zf [2.517061926s]
Dec 16 13:57:42.245: INFO: Created: latency-svc-xmj87
Dec 16 13:57:42.255: INFO: Got endpoints: latency-svc-xmj87 [2.343307929s]
Dec 16 13:57:42.320: INFO: Created: latency-svc-wjgjg
Dec 16 13:57:42.321: INFO: Got endpoints: latency-svc-wjgjg [2.399281407s]
Dec 16 13:57:42.525: INFO: Created: latency-svc-gk52n
Dec 16 13:57:42.544: INFO: Got endpoints: latency-svc-gk52n [2.464702105s]
Dec 16 13:57:42.703: INFO: Created: latency-svc-6j4mv
Dec 16 13:57:42.725: INFO: Got endpoints: latency-svc-6j4mv [2.585523125s]
Dec 16 13:57:42.786: INFO: Created: latency-svc-bg59r
Dec 16 13:57:42.904: INFO: Got endpoints: latency-svc-bg59r [2.650458494s]
Dec 16 13:57:42.923: INFO: Created: latency-svc-c9zzn
Dec 16 13:57:42.932: INFO: Got endpoints: latency-svc-c9zzn [2.624169815s]
Dec 16 13:57:43.000: INFO: Created: latency-svc-r7hf9
Dec 16 13:57:43.047: INFO: Got endpoints: latency-svc-r7hf9 [2.638806049s]
Dec 16 13:57:43.104: INFO: Created: latency-svc-w94qw
Dec 16 13:57:43.117: INFO: Got endpoints: latency-svc-w94qw [2.645540018s]
Dec 16 13:57:43.354: INFO: Created: latency-svc-thl7v
Dec 16 13:57:43.374: INFO: Got endpoints: latency-svc-thl7v [2.755091345s]
Dec 16 13:57:43.421: INFO: Created: latency-svc-v9sds
Dec 16 13:57:43.443: INFO: Got endpoints: latency-svc-v9sds [2.715467633s]
Dec 16 13:57:43.520: INFO: Created: latency-svc-7mfcm
Dec 16 13:57:43.530: INFO: Got endpoints: latency-svc-7mfcm [2.636118923s]
Dec 16 13:57:43.572: INFO: Created: latency-svc-2k2fm
Dec 16 13:57:43.620: INFO: Got endpoints: latency-svc-2k2fm [1.944837152s]
Dec 16 13:57:43.685: INFO: Created: latency-svc-gtqx6
Dec 16 13:57:43.688: INFO: Got endpoints: latency-svc-gtqx6 [1.813077792s]
Dec 16 13:57:43.738: INFO: Created: latency-svc-l6mbp
Dec 16 13:57:43.744: INFO: Got endpoints: latency-svc-l6mbp [1.809839418s]
Dec 16 13:57:43.876: INFO: Created: latency-svc-bk2jq
Dec 16 13:57:43.948: INFO: Created: latency-svc-4gphl
Dec 16 13:57:43.949: INFO: Got endpoints: latency-svc-bk2jq [1.821045346s]
Dec 16 13:57:44.083: INFO: Created: latency-svc-j4tbm
Dec 16 13:57:44.083: INFO: Got endpoints: latency-svc-4gphl [1.828222896s]
Dec 16 13:57:44.141: INFO: Got endpoints: latency-svc-j4tbm [1.820172126s]
Dec 16 13:57:44.150: INFO: Created: latency-svc-g7688
Dec 16 13:57:44.256: INFO: Got endpoints: latency-svc-g7688 [1.711874601s]
Dec 16 13:57:44.260: INFO: Created: latency-svc-nxgpr
Dec 16 13:57:44.276: INFO: Got endpoints: latency-svc-nxgpr [1.550611625s]
Dec 16 13:57:44.317: INFO: Created: latency-svc-j5x9c
Dec 16 13:57:44.455: INFO: Got endpoints: latency-svc-j5x9c [1.55082472s]
Dec 16 13:57:44.461: INFO: Created: latency-svc-kctr9
Dec 16 13:57:44.464: INFO: Got endpoints: latency-svc-kctr9 [1.532283594s]
Dec 16 13:57:44.621: INFO: Created: latency-svc-b5rwk
Dec 16 13:57:44.625: INFO: Got endpoints: latency-svc-b5rwk [1.577727388s]
Dec 16 13:57:44.676: INFO: Created: latency-svc-dw94s
Dec 16 13:57:44.680: INFO: Got endpoints: latency-svc-dw94s [1.562620116s]
Dec 16 13:57:44.791: INFO: Created: latency-svc-nnf65
Dec 16 13:57:44.818: INFO: Got endpoints: latency-svc-nnf65 [1.44381803s]
Dec 16 13:57:44.940: INFO: Created: latency-svc-pcqbx
Dec 16 13:57:44.946: INFO: Got endpoints: latency-svc-pcqbx [1.503006612s]
Dec 16 13:57:45.016: INFO: Created: latency-svc-vkwxn
Dec 16 13:57:45.020: INFO: Got endpoints: latency-svc-vkwxn [1.489430078s]
Dec 16 13:57:45.094: INFO: Created: latency-svc-vf2w6
Dec 16 13:57:45.100: INFO: Got endpoints: latency-svc-vf2w6 [1.479368666s]
Dec 16 13:57:45.145: INFO: Created: latency-svc-5vdjd
Dec 16 13:57:45.168: INFO: Got endpoints: latency-svc-5vdjd [1.479769175s]
Dec 16 13:57:45.368: INFO: Created: latency-svc-qpn9l
Dec 16 13:57:45.376: INFO: Got endpoints: latency-svc-qpn9l [1.632027088s]
Dec 16 13:57:45.415: INFO: Created: latency-svc-x5lmt
Dec 16 13:57:45.493: INFO: Got endpoints: latency-svc-x5lmt [1.544277757s]
Dec 16 13:57:45.510: INFO: Created: latency-svc-v2vg2
Dec 16 13:57:45.518: INFO: Got endpoints: latency-svc-v2vg2 [1.434130501s]
Dec 16 13:57:45.573: INFO: Created: latency-svc-frzh8
Dec 16 13:57:45.581: INFO: Got endpoints: latency-svc-frzh8 [1.439682447s]
Dec 16 13:57:45.657: INFO: Created: latency-svc-vzz6j
Dec 16 13:57:45.679: INFO: Got endpoints: latency-svc-vzz6j [1.421861057s]
Dec 16 13:57:45.717: INFO: Created: latency-svc-wkwxm
Dec 16 13:57:45.727: INFO: Got endpoints: latency-svc-wkwxm [1.450181365s]
Dec 16 13:57:45.849: INFO: Created: latency-svc-jnldf
Dec 16 13:57:45.906: INFO: Got endpoints: latency-svc-jnldf [1.449985119s]
Dec 16 13:57:45.931: INFO: Created: latency-svc-ckwxz
Dec 16 13:57:46.393: INFO: Got endpoints: latency-svc-ckwxz [1.928201162s]
Dec 16 13:57:46.645: INFO: Created: latency-svc-zmrrp
Dec 16 13:57:46.665: INFO: Got endpoints: latency-svc-zmrrp [2.039955183s]
Dec 16 13:57:46.729: INFO: Created: latency-svc-vkgsk
Dec 16 13:57:46.863: INFO: Got endpoints: latency-svc-vkgsk [2.182506478s]
Dec 16 13:57:46.873: INFO: Created: latency-svc-xgc46
Dec 16 13:57:46.873: INFO: Got endpoints: latency-svc-xgc46 [2.054592759s]
Dec 16 13:57:47.000: INFO: Created: latency-svc-6f2zg
Dec 16 13:57:47.031: INFO: Got endpoints: latency-svc-6f2zg [2.08536973s]
Dec 16 13:57:47.256: INFO: Created: latency-svc-shfkv
Dec 16 13:57:47.260: INFO: Got endpoints: latency-svc-shfkv [2.239851629s]
Dec 16 13:57:47.326: INFO: Created: latency-svc-phmgd
Dec 16 13:57:47.461: INFO: Got endpoints: latency-svc-phmgd [2.360810501s]
Dec 16 13:57:47.495: INFO: Created: latency-svc-m8cjc
Dec 16 13:57:47.505: INFO: Got endpoints: latency-svc-m8cjc [2.336335238s]
Dec 16 13:57:47.678: INFO: Created: latency-svc-gpmwk
Dec 16 13:57:47.678: INFO: Got endpoints: latency-svc-gpmwk [2.301860624s]
Dec 16 13:57:47.722: INFO: Created: latency-svc-562r2
Dec 16 13:57:47.737: INFO: Got endpoints: latency-svc-562r2 [2.242881934s]
Dec 16 13:57:47.914: INFO: Created: latency-svc-dn4jk
Dec 16 13:57:47.931: INFO: Got endpoints: latency-svc-dn4jk [2.413129193s]
Dec 16 13:57:47.973: INFO: Created: latency-svc-fkz7b
Dec 16 13:57:47.992: INFO: Got endpoints: latency-svc-fkz7b [2.410672407s]
Dec 16 13:57:48.092: INFO: Created: latency-svc-r7qr5
Dec 16 13:57:48.102: INFO: Got endpoints: latency-svc-r7qr5 [2.422713499s]
Dec 16 13:57:48.161: INFO: Created: latency-svc-w4kqf
Dec 16 13:57:48.165: INFO: Got endpoints: latency-svc-w4kqf [2.437827041s]
Dec 16 13:57:48.271: INFO: Created: latency-svc-g94zd
Dec 16 13:57:48.288: INFO: Got endpoints: latency-svc-g94zd [2.381075932s]
Dec 16 13:57:48.328: INFO: Created: latency-svc-8r6fp
Dec 16 13:57:48.336: INFO: Got endpoints: latency-svc-8r6fp [1.942427388s]
Dec 16 13:57:48.453: INFO: Created: latency-svc-7k5l6
Dec 16 13:57:48.465: INFO: Got endpoints: latency-svc-7k5l6 [1.798965908s]
Dec 16 13:57:48.537: INFO: Created: latency-svc-ztbdr
Dec 16 13:57:48.609: INFO: Got endpoints: latency-svc-ztbdr [1.74596064s]
Dec 16 13:57:48.678: INFO: Created: latency-svc-ddkdk
Dec 16 13:57:48.694: INFO: Got endpoints: latency-svc-ddkdk [1.820204922s]
Dec 16 13:57:48.809: INFO: Created: latency-svc-6dgrw
Dec 16 13:57:48.827: INFO: Got endpoints: latency-svc-6dgrw [1.795346367s]
Dec 16 13:57:49.053: INFO: Created: latency-svc-tgrl5
Dec 16 13:57:49.059: INFO: Got endpoints: latency-svc-tgrl5 [1.798943188s]
Dec 16 13:57:49.232: INFO: Created: latency-svc-zp6gp
Dec 16 13:57:49.253: INFO: Got endpoints: latency-svc-zp6gp [1.791788788s]
Dec 16 13:57:49.332: INFO: Created: latency-svc-8jmxp
Dec 16 13:57:49.468: INFO: Got endpoints: latency-svc-8jmxp [1.962052771s]
Dec 16 13:57:49.506: INFO: Created: latency-svc-n92mq
Dec 16 13:57:49.513: INFO: Got endpoints: latency-svc-n92mq [1.834454237s]
Dec 16 13:57:49.662: INFO: Created: latency-svc-f4wpq
Dec 16 13:57:49.663: INFO: Got endpoints: latency-svc-f4wpq [1.926430732s]
Dec 16 13:57:49.722: INFO: Created: latency-svc-zkxbt
Dec 16 13:57:49.733: INFO: Got endpoints: latency-svc-zkxbt [1.801269272s]
Dec 16 13:57:49.954: INFO: Created: latency-svc-mmnhs
Dec 16 13:57:49.961: INFO: Got endpoints: latency-svc-mmnhs [1.967964617s]
Dec 16 13:57:50.009: INFO: Created: latency-svc-2675x
Dec 16 13:57:50.111: INFO: Got endpoints: latency-svc-2675x [2.008902536s]
Dec 16 13:57:50.130: INFO: Created: latency-svc-4dbl7
Dec 16 13:57:50.149: INFO: Got endpoints: latency-svc-4dbl7 [1.983676256s]
Dec 16 13:57:50.223: INFO: Created: latency-svc-82tqk
Dec 16 13:57:50.318: INFO: Got endpoints: latency-svc-82tqk [2.029601958s]
Dec 16 13:57:50.362: INFO: Created: latency-svc-xdgh2
Dec 16 13:57:50.362: INFO: Got endpoints: latency-svc-xdgh2 [2.025853207s]
Dec 16 13:57:50.503: INFO: Created: latency-svc-m5g7l
Dec 16 13:57:50.513: INFO: Got endpoints: latency-svc-m5g7l [2.048443498s]
Dec 16 13:57:50.575: INFO: Created: latency-svc-kqjkm
Dec 16 13:57:50.827: INFO: Got endpoints: latency-svc-kqjkm [2.217315107s]
Dec 16 13:57:51.095: INFO: Created: latency-svc-ltlq7
Dec 16 13:57:51.111: INFO: Got endpoints: latency-svc-ltlq7 [2.416651561s]
Dec 16 13:57:51.195: INFO: Created: latency-svc-jn68j
Dec 16 13:57:51.385: INFO: Got endpoints: latency-svc-jn68j [2.556999693s]
Dec 16 13:57:51.431: INFO: Created: latency-svc-l72sc
Dec 16 13:57:51.457: INFO: Got endpoints: latency-svc-l72sc [2.397550275s]
Dec 16 13:57:51.637: INFO: Created: latency-svc-89xvl
Dec 16 13:57:51.659: INFO: Got endpoints: latency-svc-89xvl [2.40501333s]
Dec 16 13:57:51.719: INFO: Created: latency-svc-2vc6p
Dec 16 13:57:51.720: INFO: Got endpoints: latency-svc-2vc6p [2.251573506s]
Dec 16 13:57:51.848: INFO: Created: latency-svc-w99wj
Dec 16 13:57:51.920: INFO: Got endpoints: latency-svc-w99wj [2.406750483s]
Dec 16 13:57:51.938: INFO: Created: latency-svc-d5gz9
Dec 16 13:57:52.041: INFO: Got endpoints: latency-svc-d5gz9 [2.377175862s]
Dec 16 13:57:52.070: INFO: Created: latency-svc-rqlh6
Dec 16 13:57:52.098: INFO: Got endpoints: latency-svc-rqlh6 [2.364852483s]
Dec 16 13:57:52.432: INFO: Created: latency-svc-gxvqx
Dec 16 13:57:52.500: INFO: Got endpoints: latency-svc-gxvqx [2.539319607s]
Dec 16 13:57:52.509: INFO: Created: latency-svc-zq88p
Dec 16 13:57:52.537: INFO: Got endpoints: latency-svc-zq88p [2.425390707s]
Dec 16 13:57:52.708: INFO: Created: latency-svc-7bt67
Dec 16 13:57:52.712: INFO: Got endpoints: latency-svc-7bt67 [2.562755674s]
Dec 16 13:57:52.776: INFO: Created: latency-svc-7d92z
Dec 16 13:57:52.782: INFO: Got endpoints: latency-svc-7d92z [2.463872617s]
Dec 16 13:57:52.973: INFO: Created: latency-svc-kmtqj
Dec 16 13:57:52.988: INFO: Got endpoints: latency-svc-kmtqj [2.625544457s]
Dec 16 13:57:53.051: INFO: Created: latency-svc-nxkp9
Dec 16 13:57:53.149: INFO: Got endpoints: latency-svc-nxkp9 [2.634994823s]
Dec 16 13:57:53.162: INFO: Created: latency-svc-knc6q
Dec 16 13:57:53.208: INFO: Got endpoints: latency-svc-knc6q [2.380032892s]
Dec 16 13:57:53.213: INFO: Created: latency-svc-99pgn
Dec 16 13:57:53.224: INFO: Got endpoints: latency-svc-99pgn [2.113236181s]
Dec 16 13:57:53.342: INFO: Created: latency-svc-m62g2
Dec 16 13:57:53.353: INFO: Got endpoints: latency-svc-m62g2 [1.967919315s]
Dec 16 13:57:53.411: INFO: Created: latency-svc-j7sbh
Dec 16 13:57:53.415: INFO: Got endpoints: latency-svc-j7sbh [1.957462562s]
Dec 16 13:57:53.507: INFO: Created: latency-svc-pfwwz
Dec 16 13:57:53.520: INFO: Got endpoints: latency-svc-pfwwz [1.860730246s]
Dec 16 13:57:53.564: INFO: Created: latency-svc-hrxsf
Dec 16 13:57:53.565: INFO: Got endpoints: latency-svc-hrxsf [1.845194525s]
Dec 16 13:57:53.719: INFO: Created: latency-svc-dpgvm
Dec 16 13:57:53.721: INFO: Got endpoints: latency-svc-dpgvm [1.800621952s]
Dec 16 13:57:53.793: INFO: Created: latency-svc-rt7cb
Dec 16 13:57:53.797: INFO: Got endpoints: latency-svc-rt7cb [1.755488745s]
Dec 16 13:57:53.982: INFO: Created: latency-svc-l4dh2
Dec 16 13:57:54.038: INFO: Created: latency-svc-bc5wj
Dec 16 13:57:54.039: INFO: Got endpoints: latency-svc-l4dh2 [1.940612888s]
Dec 16 13:57:54.200: INFO: Got endpoints: latency-svc-bc5wj [1.698638466s]
Dec 16 13:57:54.201: INFO: Created: latency-svc-4rsqq
Dec 16 13:57:54.307: INFO: Created: latency-svc-pp4z2
Dec 16 13:57:54.401: INFO: Got endpoints: latency-svc-4rsqq [1.863832424s]
Dec 16 13:57:54.412: INFO: Got endpoints: latency-svc-pp4z2 [1.699677839s]
Dec 16 13:57:54.414: INFO: Created: latency-svc-mk8xr
Dec 16 13:57:54.437: INFO: Got endpoints: latency-svc-mk8xr [1.654350365s]
Dec 16 13:57:54.493: INFO: Created: latency-svc-gvzth
Dec 16 13:57:54.494: INFO: Got endpoints: latency-svc-gvzth [1.506311394s]
Dec 16 13:57:54.645: INFO: Created: latency-svc-r7wrl
Dec 16 13:57:54.661: INFO: Got endpoints: latency-svc-r7wrl [1.512319598s]
Dec 16 13:57:54.719: INFO: Created: latency-svc-7wjqp
Dec 16 13:57:54.825: INFO: Got endpoints: latency-svc-7wjqp [1.617176454s]
Dec 16 13:57:54.845: INFO: Created: latency-svc-cfd5t
Dec 16 13:57:54.845: INFO: Got endpoints: latency-svc-cfd5t [1.620899279s]
Dec 16 13:57:54.906: INFO: Created: latency-svc-s9gcj
Dec 16 13:57:55.048: INFO: Got endpoints: latency-svc-s9gcj [1.694905629s]
Dec 16 13:57:55.086: INFO: Created: latency-svc-lxc8g
Dec 16 13:57:55.099: INFO: Got endpoints: latency-svc-lxc8g [1.684167393s]
Dec 16 13:57:55.218: INFO: Created: latency-svc-t28vk
Dec 16 13:57:55.252: INFO: Got endpoints: latency-svc-t28vk [1.732056167s]
Dec 16 13:57:55.304: INFO: Created: latency-svc-jtlbj
Dec 16 13:57:55.369: INFO: Got endpoints: latency-svc-jtlbj [1.8039998s]
Dec 16 13:57:55.408: INFO: Created: latency-svc-sgdvm
Dec 16 13:57:55.412: INFO: Got endpoints: latency-svc-sgdvm [1.691000739s]
Dec 16 13:57:55.469: INFO: Created: latency-svc-f7mq4
Dec 16 13:57:55.562: INFO: Created: latency-svc-ph77s
Dec 16 13:57:55.565: INFO: Got endpoints: latency-svc-f7mq4 [1.767619261s]
Dec 16 13:57:55.571: INFO: Got endpoints: latency-svc-ph77s [1.532282372s]
Dec 16 13:57:55.614: INFO: Created: latency-svc-r97v6
Dec 16 13:57:55.802: INFO: Got endpoints: latency-svc-r97v6 [1.601687644s]
Dec 16 13:57:55.865: INFO: Created: latency-svc-bsqdw
Dec 16 13:57:55.881: INFO: Got endpoints: latency-svc-bsqdw [1.479225878s]
Dec 16 13:57:56.064: INFO: Created: latency-svc-mkfcw
Dec 16 13:57:56.091: INFO: Got endpoints: latency-svc-mkfcw [1.679034144s]
Dec 16 13:57:56.297: INFO: Created: latency-svc-d7psb
Dec 16 13:57:56.312: INFO: Got endpoints: latency-svc-d7psb [1.875105271s]
Dec 16 13:57:56.379: INFO: Created: latency-svc-7tb7p
Dec 16 13:57:56.500: INFO: Got endpoints: latency-svc-7tb7p [2.005515265s]
Dec 16 13:57:56.554: INFO: Created: latency-svc-4d4gk
Dec 16 13:57:56.566: INFO: Got endpoints: latency-svc-4d4gk [1.903651367s]
Dec 16 13:57:56.680: INFO: Created: latency-svc-nrlcr
Dec 16 13:57:56.688: INFO: Got endpoints: latency-svc-nrlcr [1.862728403s]
Dec 16 13:57:56.747: INFO: Created: latency-svc-wsrd6
Dec 16 13:57:56.747: INFO: Got endpoints: latency-svc-wsrd6 [1.901564553s]
Dec 16 13:57:56.830: INFO: Created: latency-svc-xd79k
Dec 16 13:57:56.879: INFO: Got endpoints: latency-svc-xd79k [1.83054054s]
Dec 16 13:57:56.890: INFO: Created: latency-svc-dj2zm
Dec 16 13:57:56.895: INFO: Got endpoints: latency-svc-dj2zm [1.795030933s]
Dec 16 13:57:57.038: INFO: Created: latency-svc-4jj82
Dec 16 13:57:57.068: INFO: Got endpoints: latency-svc-4jj82 [1.815216093s]
Dec 16 13:57:57.138: INFO: Created: latency-svc-gk8ft
Dec 16 13:57:57.213: INFO: Got endpoints: latency-svc-gk8ft [1.842886922s]
Dec 16 13:57:57.227: INFO: Created: latency-svc-9tf6s
Dec 16 13:57:57.233: INFO: Got endpoints: latency-svc-9tf6s [1.82099793s]
Dec 16 13:57:57.446: INFO: Created: latency-svc-kv8bc
Dec 16 13:57:57.467: INFO: Got endpoints: latency-svc-kv8bc [1.901756504s]
Dec 16 13:57:57.565: INFO: Created: latency-svc-ck9c4
Dec 16 13:57:57.569: INFO: Got endpoints: latency-svc-ck9c4 [1.998034364s]
Dec 16 13:57:57.638: INFO: Created: latency-svc-56vtz
Dec 16 13:57:57.642: INFO: Got endpoints: latency-svc-56vtz [1.839712203s]
Dec 16 13:57:57.761: INFO: Created: latency-svc-s9wvf
Dec 16 13:57:57.764: INFO: Got endpoints: latency-svc-s9wvf [1.882542413s]
Dec 16 13:57:57.826: INFO: Created: latency-svc-2q8dp
Dec 16 13:57:57.917: INFO: Got endpoints: latency-svc-2q8dp [1.825391567s]
Dec 16 13:57:57.975: INFO: Created: latency-svc-lw7zn
Dec 16 13:57:57.983: INFO: Got endpoints: latency-svc-lw7zn [1.670429547s]
Dec 16 13:57:58.072: INFO: Created: latency-svc-fjj9q
Dec 16 13:57:58.105: INFO: Got endpoints: latency-svc-fjj9q [1.604170547s]
Dec 16 13:57:58.116: INFO: Created: latency-svc-5dcmv
Dec 16 13:57:58.122: INFO: Got endpoints: latency-svc-5dcmv [1.555857402s]
Dec 16 13:57:58.156: INFO: Created: latency-svc-b5txw
Dec 16 13:57:58.243: INFO: Got endpoints: latency-svc-b5txw [1.554834254s]
Dec 16 13:57:58.269: INFO: Created: latency-svc-zswgp
Dec 16 13:57:58.286: INFO: Got endpoints: latency-svc-zswgp [1.53892332s]
Dec 16 13:57:58.326: INFO: Created: latency-svc-98lw4
Dec 16 13:57:58.334: INFO: Got endpoints: latency-svc-98lw4 [1.454654051s]
Dec 16 13:57:58.481: INFO: Created: latency-svc-lvp98
Dec 16 13:57:58.501: INFO: Got endpoints: latency-svc-lvp98 [1.60661324s]
Dec 16 13:57:58.608: INFO: Created: latency-svc-cgpkx
Dec 16 13:57:58.611: INFO: Got endpoints: latency-svc-cgpkx [1.541623632s]
Dec 16 13:57:58.671: INFO: Created: latency-svc-f577x
Dec 16 13:57:58.689: INFO: Got endpoints: latency-svc-f577x [1.475393322s]
Dec 16 13:57:58.817: INFO: Created: latency-svc-wv97m
Dec 16 13:57:58.826: INFO: Got
endpoints: latency-svc-wv97m [1.592907506s]
Dec 16 13:57:58.995: INFO: Created: latency-svc-gzw9v
Dec 16 13:57:59.049: INFO: Got endpoints: latency-svc-gzw9v [1.582029177s]
Dec 16 13:57:59.050: INFO: Created: latency-svc-rrt6t
Dec 16 13:57:59.060: INFO: Got endpoints: latency-svc-rrt6t [1.490557501s]
Dec 16 13:57:59.167: INFO: Created: latency-svc-j72bf
Dec 16 13:57:59.169: INFO: Got endpoints: latency-svc-j72bf [1.527182528s]
Dec 16 13:57:59.382: INFO: Created: latency-svc-svw69
Dec 16 13:57:59.414: INFO: Got endpoints: latency-svc-svw69 [1.649974369s]
Dec 16 13:57:59.480: INFO: Created: latency-svc-plfj6
Dec 16 13:57:59.552: INFO: Got endpoints: latency-svc-plfj6 [1.63423721s]
Dec 16 13:57:59.592: INFO: Created: latency-svc-v2zxl
Dec 16 13:57:59.597: INFO: Got endpoints: latency-svc-v2zxl [1.613545652s]
Dec 16 13:57:59.641: INFO: Created: latency-svc-zsmp6
Dec 16 13:57:59.643: INFO: Got endpoints: latency-svc-zsmp6 [1.538330555s]
Dec 16 13:57:59.770: INFO: Created: latency-svc-9vp4r
Dec 16 13:57:59.809: INFO: Got endpoints: latency-svc-9vp4r [1.686226671s]
Dec 16 13:57:59.959: INFO: Created: latency-svc-prhbw
Dec 16 13:57:59.995: INFO: Got endpoints: latency-svc-prhbw [1.750640334s]
Dec 16 13:58:00.080: INFO: Created: latency-svc-6lgpr
Dec 16 13:58:00.099: INFO: Got endpoints: latency-svc-6lgpr [1.812337438s]
Dec 16 13:58:00.133: INFO: Created: latency-svc-pm4nq
Dec 16 13:58:00.146: INFO: Got endpoints: latency-svc-pm4nq [1.811889533s]
Dec 16 13:58:00.238: INFO: Created: latency-svc-r9l5v
Dec 16 13:58:00.244: INFO: Got endpoints: latency-svc-r9l5v [1.74257493s]
Dec 16 13:58:00.321: INFO: Created: latency-svc-7h9bz
Dec 16 13:58:00.327: INFO: Got endpoints: latency-svc-7h9bz [1.716181153s]
Dec 16 13:58:00.327: INFO: Latencies: [150.980435ms 153.239921ms 305.62746ms 315.480334ms 461.151171ms 488.326398ms 537.812398ms 654.860272ms 821.577126ms 913.395128ms 989.6498ms 1.073104659s 1.20103516s 1.244931265s 1.289852463s 1.41711986s 1.421861057s 1.434130501s 1.435230919s 1.439682447s 1.44381803s 1.449970085s 1.449985119s 1.450181365s 1.454654051s 1.475393322s 1.47731506s 1.479225878s 1.479368666s 1.479769175s 1.483717866s 1.489430078s 1.490557501s 1.495773468s 1.503006612s 1.506311394s 1.512319598s 1.513480234s 1.516253551s 1.520473967s 1.520754207s 1.527182528s 1.532282372s 1.532283594s 1.538330555s 1.53892332s 1.541623632s 1.541851449s 1.544277757s 1.550611625s 1.55082472s 1.554834254s 1.555857402s 1.557460841s 1.562620116s 1.574253533s 1.576371202s 1.577727388s 1.579709454s 1.582029177s 1.588401305s 1.592907506s 1.601687644s 1.604170547s 1.60661324s 1.607251589s 1.613545652s 1.617176454s 1.619496448s 1.619517303s 1.620899279s 1.624689198s 1.625611533s 1.632027088s 1.63423721s 1.640325795s 1.649974369s 1.654350365s 1.670429547s 1.673520291s 1.679034144s 1.684167393s 1.685281726s 1.686226671s 1.688577893s 1.691000739s 1.692739634s 1.694905629s 1.698638466s 1.699677839s 1.711874601s 1.716181153s 1.72489691s 1.730470137s 1.732056167s 1.737271221s 1.74257493s 1.74596064s 1.747799646s 1.750640334s 1.755488745s 1.765634751s 1.767619261s 1.790171508s 1.791788788s 1.795030933s 1.795346367s 1.798943188s 1.798965908s 1.800621952s 1.801269272s 1.8039998s 1.809839418s 1.811889533s 1.812337438s 1.813077792s 1.815216093s 1.820172126s 1.820204922s 1.82099793s 1.821045346s 1.821744968s 1.825391567s 1.828222896s 1.83054054s 1.834454237s 1.839712203s 1.842886922s 1.845194525s 1.847259513s 1.860730246s 1.862728403s 1.863832424s 1.875105271s 1.882542413s 1.901564553s 1.901756504s 1.903651367s 1.926430732s 1.928201162s 1.940612888s 1.942427388s 1.944837152s 1.957462562s 1.962052771s 1.967919315s 1.967964617s 1.983676256s 1.998034364s 2.005515265s 2.008902536s 2.025853207s 2.029601958s 2.039955183s 2.048443498s 2.054592759s 2.08536973s 2.113236181s 2.182506478s 2.217315107s 2.239851629s 2.242881934s 2.251573506s 2.293130039s 2.301860624s 2.336335238s 2.338900532s 2.343307929s 2.360810501s 2.364852483s 2.377175862s 2.380032892s 2.381075932s 2.381645685s 2.397550275s 2.399281407s 2.40501333s 2.406750483s 2.410672407s 2.413129193s 2.416651561s 2.422713499s 2.425390707s 2.437827041s 2.463872617s 2.464702105s 2.517061926s 2.539319607s 2.556999693s 2.562755674s 2.585523125s 2.624169815s 2.625544457s 2.634994823s 2.636118923s 2.638806049s 2.645540018s 2.650458494s 2.715467633s 2.755091345s]
Dec 16 13:58:00.328: INFO: 50 %ile: 1.755488745s
Dec 16 13:58:00.328: INFO: 90 %ile: 2.416651561s
Dec 16 13:58:00.328: INFO: 99 %ile: 2.715467633s
Dec 16 13:58:00.328: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 13:58:00.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7690" for this suite.
Dec 16 13:58:48.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:58:48.555: INFO: namespace svc-latency-7690 deletion completed in 48.165775607s

• [SLOW TEST:80.198 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 13:58:48.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3069
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 16 13:58:48.745: INFO: Found 0 stateful pods, waiting for 3
Dec 16 13:58:58.759: INFO: Found 2 stateful pods, waiting for 3
Dec 16 13:59:08.764: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 13:59:08.764: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 13:59:08.764: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 16 13:59:18.758: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 13:59:18.758: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 13:59:18.759: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 16 13:59:18.798: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 16 13:59:28.884: INFO: Updating stateful set ss2
Dec 16 13:59:28.956: INFO: Waiting for Pod statefulset-3069/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 16 13:59:39.465: INFO: Found 2 stateful pods, waiting for 3
Dec 16 13:59:49.486: INFO: Found 2 stateful pods, waiting for 3
Dec 16 13:59:59.478: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 13:59:59.478: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 13:59:59.478: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 16 13:59:59.506: INFO: Updating stateful set ss2
Dec 16 13:59:59.570: INFO: Waiting for Pod statefulset-3069/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:00:09.614: INFO: Updating stateful set ss2
Dec 16 14:00:09.867: INFO: Waiting for StatefulSet statefulset-3069/ss2 to complete update
Dec 16 14:00:09.868: INFO: Waiting for Pod statefulset-3069/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:00:19.902: INFO: Waiting for StatefulSet statefulset-3069/ss2 to complete update
Dec 16 14:00:19.902: INFO: Waiting for Pod statefulset-3069/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:00:29.908: INFO: Waiting for StatefulSet statefulset-3069/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 16 14:00:39.888: INFO: Deleting all statefulset in ns statefulset-3069
Dec 16 14:00:39.895: INFO: Scaling statefulset ss2 to 0
Dec 16 14:01:09.948: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 14:01:09.953: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:01:09.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying
namespace "statefulset-3069" for this suite.
Dec 16 14:01:18.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:01:18.187: INFO: namespace statefulset-3069 deletion completed in 8.161679065s

• [SLOW TEST:149.631 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:01:18.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-748
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 14:01:18.312: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 14:01:58.649: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-748 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:01:58.649: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:01:59.103: INFO: Waiting for endpoints: map[]
Dec 16 14:01:59.112: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-748 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:01:59.112: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:01:59.713: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:01:59.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-748" for this suite.
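Editor's annotation (not part of the captured log): the intra-pod check above has a host-test pod curl a `/dial` endpoint, which in turn probes each target pod for its hostname and reports the replies. A minimal stand-alone sketch of that probe-and-collect pattern, using only the Python standard library; the `HostNameHandler` server and `dial` helper below are illustrative stand-ins, not the e2e framework's or the test container's actual code:

```python
import http.server
import socket
import threading
import urllib.request

class HostNameHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the container endpoint the test probes: replies with the host's name."""
    def do_GET(self):
        body = socket.gethostname().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def dial(host, port, tries=1):
    """Probe http://host:port/ up to `tries` times and collect the distinct
    replies, loosely mirroring the /dial?...&tries=1 fan-out seen in the log."""
    replies = set()
    for _ in range(tries):
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=5) as resp:
            replies.add(resp.read().decode())
    return replies

if __name__ == "__main__":
    # Demo: serve on an ephemeral local port and dial it twice.
    srv = http.server.HTTPServer(("127.0.0.1", 0), HostNameHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    print(dial("127.0.0.1", srv.server_address[1], tries=2))
    srv.shutdown()
```

In the real test the connectivity assertion is that the set of hostnames collected this way eventually covers every pod behind the service; here a single local server stands in for the whole fleet.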
Dec 16 14:02:23.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:02:23.887: INFO: namespace pod-network-test-748 deletion completed in 24.161989267s

• [SLOW TEST:65.698 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:02:23.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bcbf4107-3cb2-4ff5-ad84-cd2cb66385a4
STEP: Creating a pod to test consume configMaps
Dec 16 14:02:24.044: INFO: Waiting up to 5m0s for pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece" in namespace "configmap-1201" to be "success or failure"
Dec 16 14:02:24.049: INFO: Pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece": Phase="Pending", Reason="", readiness=false. Elapsed: 4.72029ms
Dec 16 14:02:26.057: INFO: Pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012783855s
Dec 16 14:02:28.066: INFO: Pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021797584s
Dec 16 14:02:30.074: INFO: Pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028915995s
Dec 16 14:02:32.084: INFO: Pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038912754s
STEP: Saw pod success
Dec 16 14:02:32.084: INFO: Pod "pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece" satisfied condition "success or failure"
Dec 16 14:02:32.106: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece container configmap-volume-test:
STEP: delete the pod
Dec 16 14:02:32.181: INFO: Waiting for pod pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece to disappear
Dec 16 14:02:32.189: INFO: Pod pod-configmaps-9910f548-57d6-4c85-87b5-e54ada361ece no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:02:32.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1201" for this suite.
Dec 16 14:02:38.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:02:38.438: INFO: namespace configmap-1201 deletion completed in 6.157085107s

• [SLOW TEST:14.551 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:02:38.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:02:38.642: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 32.837061ms)
Dec 16 14:02:38.653: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.834815ms)
Dec 16 14:02:38.659: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.403543ms)
Dec 16 14:02:38.664: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.746036ms)
Dec 16 14:02:38.668: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.352237ms)
Dec 16 14:02:39.690: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 1.021420214s)
Dec 16 14:02:39.710: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.599564ms)
Dec 16 14:02:39.723: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.761129ms)
Dec 16 14:02:39.738: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.865017ms)
Dec 16 14:02:39.756: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.018013ms)
Dec 16 14:02:39.767: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.814549ms)
Dec 16 14:02:39.817: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 49.948391ms)
Dec 16 14:02:39.831: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.288842ms)
Dec 16 14:02:39.837: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.955351ms)
Dec 16 14:02:39.858: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.007865ms)
Dec 16 14:02:39.870: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.218108ms)
Dec 16 14:02:39.884: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.280515ms)
Dec 16 14:02:39.897: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.07509ms)
Dec 16 14:02:39.904: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.715069ms)
Dec 16 14:02:39.912: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.361325ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:02:39.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8592" for this suite.
Dec 16 14:02:45.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:02:46.072: INFO: namespace proxy-8592 deletion completed in 6.153722083s

• [SLOW TEST:7.633 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
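[Editor's note — not part of the e2e output] The twenty requests above all hit the kubelet's log directory through the API server's node proxy subresource, with the kubelet port (10250) given explicitly in the node name. A small sketch of how such a proxy path is assembled (the helper name is illustrative, not a Kubernetes API):

```python
# Illustrative helper: build the API-server proxy path for a kubelet
# "logs" subresource request, mirroring the URLs in the log above.
# The node name and port come from this particular test run.
def kubelet_logs_proxy_path(node: str, port: int = 10250) -> str:
    """Return the API path that proxies to <node>'s kubelet /logs/ endpoint."""
    return f"/api/v1/nodes/{node}:{port}/proxy/logs/"

print(kubelet_logs_proxy_path("iruya-node"))
# e.g. fetch it with: kubectl get --raw "$(...)" against a live cluster
```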
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:02:46.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 16 14:02:46.140: INFO: Waiting up to 5m0s for pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca" in namespace "emptydir-5836" to be "success or failure"
Dec 16 14:02:46.143: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912376ms
Dec 16 14:02:48.149: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009143568s
Dec 16 14:02:50.154: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014375319s
Dec 16 14:02:52.167: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027160434s
Dec 16 14:02:54.177: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036969882s
Dec 16 14:02:56.184: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044417055s
STEP: Saw pod success
Dec 16 14:02:56.184: INFO: Pod "pod-3a7633e6-126b-4275-9da1-8b5dbc550fca" satisfied condition "success or failure"
Dec 16 14:02:56.189: INFO: Trying to get logs from node iruya-node pod pod-3a7633e6-126b-4275-9da1-8b5dbc550fca container test-container: 
STEP: delete the pod
Dec 16 14:02:56.268: INFO: Waiting for pod pod-3a7633e6-126b-4275-9da1-8b5dbc550fca to disappear
Dec 16 14:02:56.276: INFO: Pod pod-3a7633e6-126b-4275-9da1-8b5dbc550fca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:02:56.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5836" for this suite.
Dec 16 14:03:03.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:03:03.313: INFO: namespace emptydir-5836 deletion completed in 7.031105525s

• [SLOW TEST:17.242 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:03:03.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 16 14:03:03.464: INFO: Waiting up to 5m0s for pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20" in namespace "downward-api-5830" to be "success or failure"
Dec 16 14:03:03.470: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286904ms
Dec 16 14:03:05.482: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017482665s
Dec 16 14:03:07.497: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033313423s
Dec 16 14:03:09.506: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041445548s
Dec 16 14:03:11.518: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053402601s
Dec 16 14:03:13.526: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062048368s
STEP: Saw pod success
Dec 16 14:03:13.526: INFO: Pod "downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20" satisfied condition "success or failure"
Dec 16 14:03:13.531: INFO: Trying to get logs from node iruya-node pod downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20 container dapi-container: 
STEP: delete the pod
Dec 16 14:03:13.617: INFO: Waiting for pod downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20 to disappear
Dec 16 14:03:13.622: INFO: Pod downward-api-51435d0b-8e57-4c13-aa3b-6617db7ddb20 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:03:13.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5830" for this suite.
Dec 16 14:03:19.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:03:19.897: INFO: namespace downward-api-5830 deletion completed in 6.268190576s

• [SLOW TEST:16.583 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:03:19.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-6a27a2fc-cf45-43fe-9f0f-0766a626bc2f
STEP: Creating a pod to test consume configMaps
Dec 16 14:03:20.043: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf" in namespace "projected-2728" to be "success or failure"
Dec 16 14:03:20.048: INFO: Pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.082697ms
Dec 16 14:03:22.054: INFO: Pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011613022s
Dec 16 14:03:24.063: INFO: Pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019643447s
Dec 16 14:03:26.075: INFO: Pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032031483s
Dec 16 14:03:28.095: INFO: Pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052540558s
STEP: Saw pod success
Dec 16 14:03:28.096: INFO: Pod "pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf" satisfied condition "success or failure"
Dec 16 14:03:28.099: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf container projected-configmap-volume-test: 
STEP: delete the pod
Dec 16 14:03:28.194: INFO: Waiting for pod pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf to disappear
Dec 16 14:03:28.198: INFO: Pod pod-projected-configmaps-16621bf5-ff32-45d3-a082-c1a771b88caf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:03:28.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2728" for this suite.
Dec 16 14:03:34.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:03:34.342: INFO: namespace projected-2728 deletion completed in 6.139116343s

• [SLOW TEST:14.444 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:03:34.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 14:03:34.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468" in namespace "projected-962" to be "success or failure"
Dec 16 14:03:34.529: INFO: Pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46042ms
Dec 16 14:03:36.542: INFO: Pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021225625s
Dec 16 14:03:38.563: INFO: Pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04247495s
Dec 16 14:03:40.584: INFO: Pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06392739s
Dec 16 14:03:42.608: INFO: Pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087741879s
STEP: Saw pod success
Dec 16 14:03:42.608: INFO: Pod "downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468" satisfied condition "success or failure"
Dec 16 14:03:42.624: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468 container client-container: 
STEP: delete the pod
Dec 16 14:03:42.731: INFO: Waiting for pod downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468 to disappear
Dec 16 14:03:42.737: INFO: Pod downwardapi-volume-d4700895-4777-4c12-9b6e-1be919039468 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:03:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-962" for this suite.
Dec 16 14:03:48.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:03:48.896: INFO: namespace projected-962 deletion completed in 6.155747373s

• [SLOW TEST:14.554 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:03:48.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-311f793f-6b05-46cf-b434-662e417d45e6
STEP: Creating a pod to test consume secrets
Dec 16 14:03:49.085: INFO: Waiting up to 5m0s for pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957" in namespace "secrets-4989" to be "success or failure"
Dec 16 14:03:49.173: INFO: Pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957": Phase="Pending", Reason="", readiness=false. Elapsed: 88.237164ms
Dec 16 14:03:51.185: INFO: Pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100562716s
Dec 16 14:03:53.196: INFO: Pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111581253s
Dec 16 14:03:55.205: INFO: Pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12046215s
Dec 16 14:03:57.213: INFO: Pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128742174s
STEP: Saw pod success
Dec 16 14:03:57.214: INFO: Pod "pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957" satisfied condition "success or failure"
Dec 16 14:03:57.217: INFO: Trying to get logs from node iruya-node pod pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957 container secret-volume-test: 
STEP: delete the pod
Dec 16 14:03:57.293: INFO: Waiting for pod pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957 to disappear
Dec 16 14:03:57.300: INFO: Pod pod-secrets-6ed8e5c2-cf33-45e6-b164-50b804722957 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:03:57.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4989" for this suite.
Dec 16 14:04:03.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:04:03.478: INFO: namespace secrets-4989 deletion completed in 6.173793038s

• [SLOW TEST:14.581 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:04:03.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:04:35.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8780" for this suite.
Dec 16 14:04:41.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:04:42.316: INFO: namespace namespaces-8780 deletion completed in 6.369283195s
STEP: Destroying namespace "nsdeletetest-8135" for this suite.
Dec 16 14:04:42.319: INFO: Namespace nsdeletetest-8135 was already deleted
STEP: Destroying namespace "nsdeletetest-1035" for this suite.
Dec 16 14:04:48.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:04:48.567: INFO: namespace nsdeletetest-1035 deletion completed in 6.247753551s

• [SLOW TEST:45.089 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:04:48.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ee7a72dc-aa65-4d99-8139-5a3b1d87b7f3
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ee7a72dc-aa65-4d99-8139-5a3b1d87b7f3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:06:08.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6458" for this suite.
Dec 16 14:06:30.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:06:30.215: INFO: namespace configmap-6458 deletion completed in 22.162946619s

• [SLOW TEST:101.648 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
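[Editor's note — not part of the e2e output] The ConfigMap-update test above relies on the kubelet propagating ConfigMap changes into an already-mounted volume, which happens on the kubelet's periodic sync rather than immediately; that propagation delay is consistent with this test's long wait ("waiting to observe update in volume", SLOW TEST:101.648 seconds). A hedged sketch of the fixture shape (names and image are illustrative):

```yaml
# Illustrative pod: configMap volume whose contents the kubelet refreshes
# after the ConfigMap object is updated (eventually, on kubelet sync).
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd   # editing this ConfigMap changes the mounted file
```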
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:06:30.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 16 14:06:30.399: INFO: Number of nodes with available pods: 0
Dec 16 14:06:30.399: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:31.419: INFO: Number of nodes with available pods: 0
Dec 16 14:06:31.419: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:32.432: INFO: Number of nodes with available pods: 0
Dec 16 14:06:32.432: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:33.418: INFO: Number of nodes with available pods: 0
Dec 16 14:06:33.418: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:34.424: INFO: Number of nodes with available pods: 0
Dec 16 14:06:34.424: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:35.422: INFO: Number of nodes with available pods: 0
Dec 16 14:06:35.422: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:37.792: INFO: Number of nodes with available pods: 0
Dec 16 14:06:37.792: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:38.900: INFO: Number of nodes with available pods: 0
Dec 16 14:06:38.900: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:39.436: INFO: Number of nodes with available pods: 0
Dec 16 14:06:39.436: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:06:40.415: INFO: Number of nodes with available pods: 1
Dec 16 14:06:40.415: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:41.416: INFO: Number of nodes with available pods: 2
Dec 16 14:06:41.416: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 16 14:06:41.526: INFO: Number of nodes with available pods: 1
Dec 16 14:06:41.526: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:42.901: INFO: Number of nodes with available pods: 1
Dec 16 14:06:42.901: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:43.541: INFO: Number of nodes with available pods: 1
Dec 16 14:06:43.541: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:44.554: INFO: Number of nodes with available pods: 1
Dec 16 14:06:44.554: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:45.547: INFO: Number of nodes with available pods: 1
Dec 16 14:06:45.548: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:46.950: INFO: Number of nodes with available pods: 1
Dec 16 14:06:46.950: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:47.539: INFO: Number of nodes with available pods: 1
Dec 16 14:06:47.539: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:48.667: INFO: Number of nodes with available pods: 1
Dec 16 14:06:48.667: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:49.538: INFO: Number of nodes with available pods: 1
Dec 16 14:06:49.538: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:50.554: INFO: Number of nodes with available pods: 1
Dec 16 14:06:50.555: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 16 14:06:51.579: INFO: Number of nodes with available pods: 2
Dec 16 14:06:51.579: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1239, will wait for the garbage collector to delete the pods
Dec 16 14:06:51.662: INFO: Deleting DaemonSet.extensions daemon-set took: 20.992405ms
Dec 16 14:06:51.963: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.591654ms
Dec 16 14:07:06.676: INFO: Number of nodes with available pods: 0
Dec 16 14:07:06.676: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 14:07:06.684: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1239/daemonsets","resourceVersion":"16895266"},"items":null}

Dec 16 14:07:06.689: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1239/pods","resourceVersion":"16895266"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:07:06.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1239" for this suite.
Dec 16 14:07:12.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:07:12.853: INFO: namespace daemonsets-1239 deletion completed in 6.109268775s

• [SLOW TEST:42.638 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
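The long run of "Number of nodes with available pods" lines above comes from a poll loop in the e2e framework: it retries roughly once per second until every schedulable node reports an available daemon pod, then logs the final "Number of running nodes: 2, number of available pods: 2" line. A minimal Python sketch of that loop, where `get_available` is a stand-in for the real API query:

```python
import time

def wait_for_daemonset_ready(get_available, node_count, timeout=300, interval=1.0):
    """Poll until every schedulable node runs an available daemon pod,
    mirroring the 'Number of nodes with available pods' log lines."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        available = get_available()  # nodes currently running an available pod
        print(f"Number of nodes with available pods: {available}")
        if available == node_count:
            return True
        time.sleep(interval)
    return False

# Simulated cluster: the second node's pod becomes available on the third poll.
polls = iter([1, 1, 2])
assert wait_for_daemonset_ready(lambda: next(polls), node_count=2, interval=0)
```

The log shows the same shape: many polls at `available == 1`, then one at `2`, then the success line.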
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:07:12.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7c7c09be-9770-441b-8bdc-b083b2014448
STEP: Creating a pod to test consume configMaps
Dec 16 14:07:13.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a" in namespace "configmap-4086" to be "success or failure"
Dec 16 14:07:13.040: INFO: Pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.211597ms
Dec 16 14:07:15.047: INFO: Pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01025601s
Dec 16 14:07:17.061: INFO: Pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024007545s
Dec 16 14:07:19.067: INFO: Pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030334912s
Dec 16 14:07:21.078: INFO: Pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041331476s
STEP: Saw pod success
Dec 16 14:07:21.078: INFO: Pod "pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a" satisfied condition "success or failure"
Dec 16 14:07:21.093: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a container configmap-volume-test: 
STEP: delete the pod
Dec 16 14:07:21.138: INFO: Waiting for pod pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a to disappear
Dec 16 14:07:21.154: INFO: Pod pod-configmaps-c32d2f6f-afaa-4d30-8ad8-e5759278727a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:07:21.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4086" for this suite.
Dec 16 14:07:29.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:07:29.485: INFO: namespace configmap-4086 deletion completed in 8.324052343s

• [SLOW TEST:16.632 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
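Every test pod in this run is waited on with the same pattern: 'Waiting up to 5m0s for pod ... to be "success or failure"', followed by repeated `Phase="Pending"` lines until the pod reaches a terminal phase. A sketch of that condition loop (the `get_phase` callable stands in for reading the pod's status):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300, interval=2.0):
    """Poll a pod's phase until it is terminal; the "success or failure"
    condition in the log is met once the phase is Succeeded or Failed."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f'Phase="{phase}". Elapsed: {time.monotonic() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Mirrors the log above: a few Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
assert wait_for_terminal_phase(lambda: next(phases), interval=0) == "Succeeded"
```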
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:07:29.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-669aecd4-a09b-456c-b98c-9a6ef2c899a4
STEP: Creating a pod to test consume configMaps
Dec 16 14:07:29.615: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009" in namespace "configmap-7341" to be "success or failure"
Dec 16 14:07:29.630: INFO: Pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.86374ms
Dec 16 14:07:31.637: INFO: Pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021812254s
Dec 16 14:07:33.679: INFO: Pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063992973s
Dec 16 14:07:35.690: INFO: Pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074720588s
Dec 16 14:07:37.705: INFO: Pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089408726s
STEP: Saw pod success
Dec 16 14:07:37.705: INFO: Pod "pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009" satisfied condition "success or failure"
Dec 16 14:07:37.709: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009 container configmap-volume-test: 
STEP: delete the pod
Dec 16 14:07:37.890: INFO: Waiting for pod pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009 to disappear
Dec 16 14:07:37.902: INFO: Pod pod-configmaps-7fe4fd27-6ec6-4ac0-980a-959f93a44009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:07:37.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7341" for this suite.
Dec 16 14:07:43.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:07:44.082: INFO: namespace configmap-7341 deletion completed in 6.173855658s

• [SLOW TEST:14.597 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
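The "mappings and Item mode set" test above exercises a ConfigMap volume whose `items` list remaps a key to a custom path with an explicit file mode. A sketch of the kind of pod spec involved, expressed as a Python dict; the key, path, and mode values here are illustrative, not the exact ones the test used:

```python
# Illustrative pod spec: a ConfigMap volume with an items mapping and mode.
config_map_volume_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/configmap-volume/path/to/data-2"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "volumes": [{
            "name": "configmap-volume",
            "configMap": {
                "name": "configmap-test-volume-map",
                # Map the key "data-2" to a custom path with mode 0400.
                "items": [{"key": "data-2",
                           "path": "path/to/data-2",
                           "mode": 0o400}],
            },
        }],
        "restartPolicy": "Never",
    },
}

item = config_map_volume_pod["spec"]["volumes"][0]["configMap"]["items"][0]
assert item["path"] == "path/to/data-2" and item["mode"] == 0o400
```

The test then reads the file back from inside the container and checks both its content and its permission bits.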
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:07:44.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:07:52.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8510" for this suite.
Dec 16 14:07:58.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:07:58.521: INFO: namespace kubelet-test-8510 deletion completed in 6.194981357s

• [SLOW TEST:14.438 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:07:58.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 16 14:07:58.670: INFO: Waiting up to 5m0s for pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4" in namespace "emptydir-291" to be "success or failure"
Dec 16 14:07:58.683: INFO: Pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.645368ms
Dec 16 14:08:00.695: INFO: Pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024780277s
Dec 16 14:08:02.732: INFO: Pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061534399s
Dec 16 14:08:04.743: INFO: Pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072128827s
Dec 16 14:08:06.750: INFO: Pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079799154s
STEP: Saw pod success
Dec 16 14:08:06.750: INFO: Pod "pod-ffefc431-cb87-4487-a987-5ab3684c21f4" satisfied condition "success or failure"
Dec 16 14:08:06.756: INFO: Trying to get logs from node iruya-node pod pod-ffefc431-cb87-4487-a987-5ab3684c21f4 container test-container: 
STEP: delete the pod
Dec 16 14:08:06.834: INFO: Waiting for pod pod-ffefc431-cb87-4487-a987-5ab3684c21f4 to disappear
Dec 16 14:08:06.845: INFO: Pod pod-ffefc431-cb87-4487-a987-5ab3684c21f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:08:06.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-291" for this suite.
Dec 16 14:08:12.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:08:12.987: INFO: namespace emptydir-291 deletion completed in 6.133215652s

• [SLOW TEST:14.465 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
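The "(root,0644,tmpfs)" test writes a file into a memory-backed emptyDir and verifies its permission bits. A local sketch of what is being checked, using a temp directory as a stand-in for the tmpfs mount (the `chmod` makes the result independent of the process umask):

```python
import os
import stat
import tempfile

def create_with_mode(directory, name, mode=0o644):
    """Create a file the way the mount tester does, force the requested
    mode with chmod, and report the permission bits actually on disk."""
    path = os.path.join(directory, name)
    with open(path, "wb") as f:
        f.write(b"mount-tester new file\n")
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:  # stands in for the tmpfs emptyDir
    assert create_with_mode(d, "test-file") == 0o644
```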
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:08:12.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 14:08:13.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2164'
Dec 16 14:08:15.112: INFO: stderr: ""
Dec 16 14:08:15.112: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 16 14:08:15.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2164'
Dec 16 14:08:20.142: INFO: stderr: ""
Dec 16 14:08:20.142: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:08:20.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2164" for this suite.
Dec 16 14:08:26.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:08:26.352: INFO: namespace kubectl-2164 deletion completed in 6.20117678s

• [SLOW TEST:13.365 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
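The command logged above shows the v1.15-era behavior of `kubectl run`: with `--restart=Never` and the `run-pod/v1` generator it creates a bare Pod rather than a Deployment (the `--generator` flags were deprecated and later removed). A small sketch that builds the same argv the log shows:

```python
import shlex

def kubectl_run_pod(name, image, namespace, kubeconfig="/root/.kube/config"):
    """Build the kubectl command line from the log as an argv list.
    With --restart=Never plus the run-pod/v1 generator, kubectl v1.15
    creates a bare Pod instead of a Deployment."""
    return ["kubectl", f"--kubeconfig={kubeconfig}", "run", name,
            "--restart=Never", "--generator=run-pod/v1",
            f"--image={image}", f"--namespace={namespace}"]

argv = kubectl_run_pod("e2e-test-nginx-pod",
                       "docker.io/library/nginx:1.14-alpine", "kubectl-2164")
print(shlex.join(argv))
```

On success kubectl prints `pod/e2e-test-nginx-pod created` to stdout, which is exactly what the test asserts on.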
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:08:26.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1216 14:08:57.041528       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 14:08:57.041: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:08:57.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7444" for this suite.
Dec 16 14:09:03.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:09:04.446: INFO: namespace gc-7444 deletion completed in 7.37404313s

• [SLOW TEST:38.094 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
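Both garbage-collector tests in this run hinge on `deleteOptions.propagationPolicy`. With `Orphan`, deleting an owner (the Deployment, or the ReplicationController in the next test) must leave its dependents alive, stripped of their ownerReferences; with `Background` or `Foreground`, dependents are cascaded. A toy model of those semantics, not the real controller:

```python
def delete_with_policy(objects, owner, policy):
    """Toy model of garbage-collection semantics. With "Orphan", dependents
    survive but lose their ownerReferences; with "Background" or
    "Foreground" they are deleted along with the owner."""
    survivors = []
    for obj in objects:
        if obj is owner:
            continue  # the owner itself is always deleted
        if owner["name"] in obj.get("ownerReferences", []):
            if policy == "Orphan":
                obj = dict(obj, ownerReferences=[])  # orphaned, kept
            else:
                continue  # cascading delete
        survivors.append(obj)
    return survivors

deploy = {"name": "e2e-test-deployment"}
rs = {"name": "e2e-test-rs", "ownerReferences": ["e2e-test-deployment"]}
assert delete_with_policy([deploy, rs], deploy, "Orphan") == \
    [{"name": "e2e-test-rs", "ownerReferences": []}]
assert delete_with_policy([deploy, rs], deploy, "Background") == []
```

The 30-second wait in the log exists to catch the failure mode: the garbage collector mistakenly deleting the ReplicaSet despite the Orphan policy.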
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:09:04.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1216 14:09:45.899868       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 14:09:45.900: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:09:45.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4154" for this suite.
Dec 16 14:10:05.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:10:06.098: INFO: namespace gc-4154 deletion completed in 20.188202324s

• [SLOW TEST:61.652 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:10:06.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:10:06.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 16 14:10:06.683: INFO: stderr: ""
Dec 16 14:10:06.683: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:10:06.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6055" for this suite.
Dec 16 14:10:12.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:10:12.895: INFO: namespace kubectl-6055 deletion completed in 6.206264975s

• [SLOW TEST:6.797 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
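The "all data is printed" check boils down to verifying that both the client and server `GitVersion` fields appear in the `kubectl version` output logged above. A sketch of extracting them, run against an abridged copy of that stdout (some fields trimmed for brevity):

```python
import re

# Abridged form of the kubectl version stdout logged above.
stdout = ('Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", '
          'GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean"}\n'
          'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", '
          'GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean"}\n')

def git_versions(text):
    """Extract every GitVersion field from kubectl version output."""
    return re.findall(r'GitVersion:"(v[0-9][^"]*)"', text)

assert git_versions(stdout) == ["v1.15.7", "v1.15.1"]
```

The two versions recovered here match the client (v1.15.7) and kube-apiserver (v1.15.1) versions reported at the start of the suite.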
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:10:12.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:10:21.226: INFO: Waiting up to 5m0s for pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d" in namespace "pods-6731" to be "success or failure"
Dec 16 14:10:21.279: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.597204ms
Dec 16 14:10:23.286: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060128173s
Dec 16 14:10:25.295: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068611282s
Dec 16 14:10:27.302: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075486928s
Dec 16 14:10:29.328: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101988207s
Dec 16 14:10:31.336: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110070799s
STEP: Saw pod success
Dec 16 14:10:31.336: INFO: Pod "client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d" satisfied condition "success or failure"
Dec 16 14:10:31.340: INFO: Trying to get logs from node iruya-node pod client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d container env3cont: 
STEP: delete the pod
Dec 16 14:10:31.386: INFO: Waiting for pod client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d to disappear
Dec 16 14:10:31.482: INFO: Pod client-envvars-afe8b52f-ccca-456f-9e0b-7f902c9e816d no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:10:31.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6731" for this suite.
Dec 16 14:11:13.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:11:13.640: INFO: namespace pods-6731 deletion completed in 42.153128785s

• [SLOW TEST:60.744 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
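The "environment variables for services" test relies on the kubelet convention that, for each service existing when a pod starts, variables like `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` are injected, with the service name upper-cased and dashes turned into underscores. A sketch of that naming rule (a subset of the full variable set; the service name and address are illustrative):

```python
def service_env_vars(name, host, port):
    """Compute a subset of the env vars kubelet injects for a service:
    the name is upper-cased with dashes replaced by underscores."""
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": host,
        f"{key}_SERVICE_PORT": str(port),
        f"{key}_PORT": f"tcp://{host}:{port}",
    }

env = service_env_vars("fooservice", "10.0.0.10", 8765)
assert env["FOOSERVICE_SERVICE_HOST"] == "10.0.0.10"
assert env["FOOSERVICE_PORT"] == "tcp://10.0.0.10:8765"
```

The long wait before pod creation in the log is the test making sure the service exists first, since only services present at pod start are reflected in the environment.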
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:11:13.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 16 14:11:23.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-76686edf-addd-4838-ba26-c9d2912baa84 -c busybox-main-container --namespace=emptydir-9861 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 16 14:11:24.536: INFO: stderr: ""
Dec 16 14:11:24.536: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:11:24.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9861" for this suite.
Dec 16 14:11:30.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:11:30.676: INFO: namespace emptydir-9861 deletion completed in 6.127081361s

• [SLOW TEST:17.035 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
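The shared-volume test above has one container write `/usr/share/volumeshare/shareddata.txt` into an emptyDir while a second container reads it back through the same mount (the `kubectl exec ... cat` step). A local model of that round trip, using a temp directory in place of the emptyDir volume:

```python
import os
import tempfile

def share_between_containers(volume_dir):
    """Toy model of the shared emptyDir: the sub-container writes a file
    into the volume and the main container reads it back through the
    same mount, which is what the kubectl exec above verifies."""
    shared = os.path.join(volume_dir, "usr", "share", "volumeshare")
    os.makedirs(shared, exist_ok=True)
    # busybox sub-container side: write the marker file into the volume
    with open(os.path.join(shared, "shareddata.txt"), "w") as f:
        f.write("Hello from the busy-box sub-container\n")
    # busybox-main-container side: read it through the same volume
    with open(os.path.join(shared, "shareddata.txt")) as f:
        return f.read()

with tempfile.TemporaryDirectory() as vol:
    assert share_between_containers(vol) == "Hello from the busy-box sub-container\n"
```

The stdout captured in the log, `Hello from the busy-box sub-container`, is exactly this file's content seen from the main container.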
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:11:30.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 14:11:30.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8623'
Dec 16 14:11:30.973: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 16 14:11:30.973: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 16 14:11:35.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8623'
Dec 16 14:11:35.983: INFO: stderr: ""
Dec 16 14:11:35.984: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:11:35.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8623" for this suite.
Dec 16 14:11:42.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:11:42.184: INFO: namespace kubectl-8623 deletion completed in 6.140979816s

• [SLOW TEST:11.508 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:11:42.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 14:11:42.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9" in namespace "downward-api-2601" to be "success or failure"
Dec 16 14:11:42.436: INFO: Pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9": Phase="Pending", Reason="", readiness=false. Elapsed: 37.563953ms
Dec 16 14:11:44.492: INFO: Pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093794238s
Dec 16 14:11:46.506: INFO: Pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107713825s
Dec 16 14:11:48.525: INFO: Pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127302576s
Dec 16 14:11:50.574: INFO: Pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176357976s
STEP: Saw pod success
Dec 16 14:11:50.575: INFO: Pod "downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9" satisfied condition "success or failure"
Dec 16 14:11:50.580: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9 container client-container: 
STEP: delete the pod
Dec 16 14:11:51.025: INFO: Waiting for pod downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9 to disappear
Dec 16 14:11:51.033: INFO: Pod downwardapi-volume-82b5ecea-5453-420e-ae93-abd78afb84a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:11:51.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2601" for this suite.
Dec 16 14:11:57.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:11:57.336: INFO: namespace downward-api-2601 deletion completed in 6.293829693s

• [SLOW TEST:15.151 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:11:57.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7227
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-7227
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7227
Dec 16 14:11:57.429: INFO: Found 0 stateful pods, waiting for 1
Dec 16 14:12:07.439: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 16 14:12:07.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 14:12:08.033: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 16 14:12:08.033: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 14:12:08.033: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 14:12:08.040: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 16 14:12:18.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 14:12:18.048: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 14:12:18.087: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 16 14:12:18.088: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:18.088: INFO: 
Dec 16 14:12:18.088: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 16 14:12:19.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972023141s
Dec 16 14:12:20.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.351819363s
Dec 16 14:12:21.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.339469519s
Dec 16 14:12:22.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.314088609s
Dec 16 14:12:24.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.280200219s
Dec 16 14:12:26.013: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.117870418s
Dec 16 14:12:27.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.047424066s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7227
Dec 16 14:12:28.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:12:29.130: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 16 14:12:29.130: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 14:12:29.131: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 14:12:29.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:12:29.759: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 16 14:12:29.760: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 14:12:29.760: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 14:12:29.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:12:30.314: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 16 14:12:30.314: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 14:12:30.314: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 14:12:30.324: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:12:30.324: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:12:30.324: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 16 14:12:30.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 14:12:30.976: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 16 14:12:30.976: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 14:12:30.976: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 14:12:30.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 14:12:31.543: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 16 14:12:31.543: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 14:12:31.543: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 14:12:31.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 14:12:31.999: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 16 14:12:32.000: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 14:12:32.000: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 14:12:32.000: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 14:12:32.007: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 16 14:12:42.072: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 14:12:42.072: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 14:12:42.072: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 14:12:42.356: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 16 14:12:42.356: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:42.356: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:42.356: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:42.356: INFO: 
Dec 16 14:12:42.356: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 14:12:43.736: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 16 14:12:43.736: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:43.737: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:43.737: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:43.737: INFO: 
Dec 16 14:12:43.737: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 14:12:44.746: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 16 14:12:44.746: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:44.746: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:44.746: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:44.746: INFO: 
Dec 16 14:12:44.746: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 14:12:45.757: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 16 14:12:45.757: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:45.757: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:45.757: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:45.757: INFO: 
Dec 16 14:12:45.757: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 14:12:46.776: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 16 14:12:46.776: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:46.776: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:46.776: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:46.777: INFO: 
Dec 16 14:12:46.777: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 14:12:47.890: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 16 14:12:47.890: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:47.890: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:47.890: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:47.890: INFO: 
Dec 16 14:12:47.890: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 14:12:48.902: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 16 14:12:48.902: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:48.903: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:48.903: INFO: 
Dec 16 14:12:48.903: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 16 14:12:49.910: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 16 14:12:49.910: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:49.910: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:49.910: INFO: 
Dec 16 14:12:49.910: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 16 14:12:50.949: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 16 14:12:50.949: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:50.949: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:50.949: INFO: 
Dec 16 14:12:50.949: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 16 14:12:51.995: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 16 14:12:51.995: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:11:57 +0000 UTC  }]
Dec 16 14:12:51.995: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:12:18 +0000 UTC  }]
Dec 16 14:12:51.995: INFO: 
Dec 16 14:12:51.995: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7227
Dec 16 14:12:53.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:12:53.228: INFO: rc: 1
Dec 16 14:12:53.229: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002c9b680 exit status 1   true [0xc000889c20 0xc000889c60 0xc000889c90] [0xc000889c20 0xc000889c60 0xc000889c90] [0xc000889c48 0xc000889c88] [0xba6c50 0xba6c50] 0xc002e12de0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 16 14:13:03.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:13:03.384: INFO: rc: 1
Dec 16 14:13:03.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c9b740 exit status 1   true [0xc000889c98 0xc000889cc8 0xc000889d00] [0xc000889c98 0xc000889cc8 0xc000889d00] [0xc000889cb8 0xc000889ce0] [0xba6c50 0xba6c50] 0xc002e13200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 16 14:13:13.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:13:13.570: INFO: rc: 1
Dec 16 14:13:13.570: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e0c090 exit status 1   true [0xc0001a10a0 0xc0001a11c0 0xc0001a1270] [0xc0001a10a0 0xc0001a11c0 0xc0001a1270] [0xc0001a1190 0xc0001a1248] [0xba6c50 0xba6c50] 0xc00292f080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 16 14:13:23 – 14:17:48: INFO: [the same RunHostCmd was retried 27 more times at 10s intervals, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found]
Dec 16 14:17:58.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7227 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:17:58.485: INFO: rc: 1
Dec 16 14:17:58.486: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 16 14:17:58.486: INFO: Scaling statefulset ss to 0
Dec 16 14:17:58.506: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 16 14:17:58.508: INFO: Deleting all statefulset in ns statefulset-7227
Dec 16 14:17:58.511: INFO: Scaling statefulset ss to 0
Dec 16 14:17:58.521: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 14:17:58.523: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:17:58.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7227" for this suite.
Dec 16 14:18:04.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:18:04.683: INFO: namespace statefulset-7227 deletion completed in 6.137489798s

• [SLOW TEST:367.347 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
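The long stretch of log above shows the e2e framework's RunHostCmd retry behavior: the same `kubectl exec` is re-run every 10 s until it stops failing (here, until the test gives up and moves on). A minimal shell sketch of that pattern, with a hypothetical `retry_host_cmd` helper standing in for the framework's Go implementation:

```shell
# retry_host_cmd ATTEMPTS DELAY CMD...
# Re-runs CMD until it exits 0, waiting DELAY seconds between attempts,
# mirroring the 10s cadence in the log. Name and structure are illustrative.
retry_host_cmd() {
  attempts=$1; shift
  delay=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0            # success: stop retrying
    rc=$?
    echo "rc: $rc (attempt $i/$attempts); retrying in ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1                      # all attempts exhausted
}
```

For example, `retry_host_cmd 30 10 kubectl exec --namespace=statefulset-7227 ss-0 -- ls /tmp` would keep retrying for roughly five minutes, which matches the retry window visible in the log.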
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:18:04.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4461.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4461.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

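The `podARec=$$(hostname -i | awk ...)` fragment in both probe scripts above builds the pod's DNS A-record name by replacing the dots in the pod IP with dashes and appending the per-test pod domain. In isolation, with a sample IP standing in for the `hostname -i` output, it behaves like this:

```shell
# Derive a pod A-record name the way the probe scripts above do.
# 10.44.0.5 is a sample IP standing in for `hostname -i`.
pod_ip="10.44.0.5"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4461.pod.cluster.local"}')
echo "$podARec"   # 10-44-0-5.dns-4461.pod.cluster.local
```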
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 14:18:18.894: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9: the server could not find the requested resource (get pods dns-test-878422c7-8fab-4d72-b232-585ed988dde9)
Dec 16 14:18:18.904: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9: the server could not find the requested resource (get pods dns-test-878422c7-8fab-4d72-b232-585ed988dde9)
Dec 16 14:18:18.921: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9: the server could not find the requested resource (get pods dns-test-878422c7-8fab-4d72-b232-585ed988dde9)
Dec 16 14:18:18.934: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9: the server could not find the requested resource (get pods dns-test-878422c7-8fab-4d72-b232-585ed988dde9)
Dec 16 14:18:18.942: INFO: Unable to read jessie_udp@PodARecord from pod dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9: the server could not find the requested resource (get pods dns-test-878422c7-8fab-4d72-b232-585ed988dde9)
Dec 16 14:18:18.949: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9: the server could not find the requested resource (get pods dns-test-878422c7-8fab-4d72-b232-585ed988dde9)
Dec 16 14:18:18.949: INFO: Lookups using dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 16 14:18:24.022: INFO: DNS probes using dns-4461/dns-test-878422c7-8fab-4d72-b232-585ed988dde9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:18:24.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4461" for this suite.
Dec 16 14:18:30.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:18:30.298: INFO: namespace dns-4461 deletion completed in 6.151089219s

• [SLOW TEST:25.614 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:18:30.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 16 14:18:30.378: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
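`kubectl proxy -p 0` asks the operating system for an ephemeral port rather than a fixed one; the proxy then reports whichever port the kernel handed back, and the test curls `/api/` through it. The underlying bind-to-port-0 behavior can be seen without a cluster, sketched here with a plain socket via python3 (assumed to be on PATH):

```shell
# Binding to port 0 makes the kernel choose a free ephemeral port,
# which is what `kubectl proxy -p 0` relies on.
port=$(python3 -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1])')
echo "chose port: $port"
```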
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:18:30.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8825" for this suite.
Dec 16 14:18:36.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:18:36.759: INFO: namespace kubectl-8825 deletion completed in 6.206152318s

• [SLOW TEST:6.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:18:36.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:19:37.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9300" for this suite.
Dec 16 14:19:59.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:19:59.185: INFO: namespace container-probe-9300 deletion completed in 22.124632922s

• [SLOW TEST:82.424 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
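The readiness-probe spec above asserts an invariant rather than an event: over the whole observation window the pod must never report Ready and its container must never restart. A minimal sketch of that check in Go, using a hypothetical `observation` struct in place of real `v1.PodStatus` polling:

```go
package main

import "fmt"

// observation captures the two fields the spec checks on each poll of the
// pod: the Ready condition and the container restart count. This is a
// stand-in for reading a real v1.PodStatus from the API server.
type observation struct {
	ready    bool
	restarts int
}

// neverReadyNeverRestarted reports whether a pod with an always-failing
// readiness probe behaved as the conformance spec expects: it was never
// Ready and its container never restarted across all observations.
func neverReadyNeverRestarted(obs []observation) bool {
	for _, o := range obs {
		if o.ready || o.restarts > 0 {
			return false
		}
	}
	return true
}

func main() {
	// Three polls of a well-behaved pod: never ready, never restarted.
	obs := []observation{{false, 0}, {false, 0}, {false, 0}}
	fmt.Println(neverReadyNeverRestarted(obs)) // true
}
```

Note the asymmetry with a liveness probe: a failing readiness probe only removes the pod from service endpoints, so the restart count staying at zero is exactly what distinguishes the two in this spec.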
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:19:59.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:20:07.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3376" for this suite.
Dec 16 14:20:13.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:20:13.922: INFO: namespace emptydir-wrapper-3376 deletion completed in 6.43360917s

• [SLOW TEST:14.736 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:20:13.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 14:20:14.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7636'
Dec 16 14:20:16.260: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 16 14:20:16.260: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 16 14:20:16.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7636'
Dec 16 14:20:16.580: INFO: stderr: ""
Dec 16 14:20:16.580: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:20:16.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7636" for this suite.
Dec 16 14:20:38.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:20:38.738: INFO: namespace kubectl-7636 deletion completed in 22.150594735s

• [SLOW TEST:24.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
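The spec above verifies creation by parsing kubectl's stdout confirmation line, `job.batch/e2e-test-nginx-job created` (and the stderr warning shows `--generator=job/v1` was already deprecated in this release). A small sketch of that stdout parsing, with `parseCreatedName` being a hypothetical helper, not the e2e framework's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// parseCreatedName pulls the object name out of a kubectl confirmation
// line of the shape "<resource>.<group>/<name> created", e.g. the
// "job.batch/e2e-test-nginx-job created" stdout captured in the log.
// It returns "" when the line does not match that shape.
func parseCreatedName(stdout string) string {
	line := strings.TrimSpace(stdout)
	if !strings.HasSuffix(line, " created") {
		return ""
	}
	rest := strings.TrimSuffix(line, " created")
	if i := strings.IndexByte(rest, '/'); i >= 0 {
		return rest[i+1:]
	}
	return ""
}

func main() {
	fmt.Println(parseCreatedName("job.batch/e2e-test-nginx-job created\n"))
}
```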
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:20:38.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 16 14:20:54.987: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:20:55.025: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 14:20:57.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:20:57.032: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 14:20:59.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:20:59.032: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 14:21:01.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:21:01.035: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 14:21:03.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:21:03.032: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 14:21:05.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:21:05.033: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 14:21:07.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 14:21:07.033: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:21:07.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4907" for this suite.
Dec 16 14:21:29.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:21:29.234: INFO: namespace container-lifecycle-hook-4907 deletion completed in 22.162044501s

• [SLOW TEST:50.493 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:21:29.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:21:29.387: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 16 14:21:34.399: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 16 14:21:38.413: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 16 14:21:40.424: INFO: Creating deployment "test-rollover-deployment"
Dec 16 14:21:40.445: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 16 14:21:42.460: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 16 14:21:42.469: INFO: Ensure that both replica sets have 1 created replica
Dec 16 14:21:42.473: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 16 14:21:42.480: INFO: Updating deployment test-rollover-deployment
Dec 16 14:21:42.481: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 16 14:21:44.499: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 16 14:21:44.513: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 16 14:21:44.521: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:44.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:46.569: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:46.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:48.544: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:48.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:50.537: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:50.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:52.538: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:52.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102911, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:54.539: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:54.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102911, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:56.545: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:56.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102911, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:21:58.543: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:21:58.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102911, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:22:00.543: INFO: all replica sets need to contain the pod-template-hash label
Dec 16 14:22:00.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102911, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712102900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:22:02.541: INFO: 
Dec 16 14:22:02.542: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 16 14:22:02.556: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5212,SelfLink:/apis/apps/v1/namespaces/deployment-5212/deployments/test-rollover-deployment,UID:e67359ef-d1f0-4393-b97e-60798e19f76f,ResourceVersion:16897367,Generation:2,CreationTimestamp:2019-12-16 14:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-16 14:21:40 +0000 UTC 2019-12-16 14:21:40 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-16 14:22:01 +0000 UTC 2019-12-16 14:21:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 16 14:22:02.561: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5212,SelfLink:/apis/apps/v1/namespaces/deployment-5212/replicasets/test-rollover-deployment-854595fc44,UID:13f8a761-2c2d-46ad-b6db-af6552eb0a2b,ResourceVersion:16897356,Generation:2,CreationTimestamp:2019-12-16 14:21:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e67359ef-d1f0-4393-b97e-60798e19f76f 0xc00232fd07 0xc00232fd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 16 14:22:02.561: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 16 14:22:02.561: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5212,SelfLink:/apis/apps/v1/namespaces/deployment-5212/replicasets/test-rollover-controller,UID:de9f7e60-af76-4686-828e-9ffaf6fec028,ResourceVersion:16897365,Generation:2,CreationTimestamp:2019-12-16 14:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e67359ef-d1f0-4393-b97e-60798e19f76f 0xc00232fc1f 0xc00232fc30}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 16 14:22:02.562: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5212,SelfLink:/apis/apps/v1/namespaces/deployment-5212/replicasets/test-rollover-deployment-9b8b997cf,UID:abc79485-b08e-45a2-9d8f-e5108dbc9369,ResourceVersion:16897317,Generation:2,CreationTimestamp:2019-12-16 14:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e67359ef-d1f0-4393-b97e-60798e19f76f 0xc00232fde0 0xc00232fde1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 16 14:22:02.567: INFO: Pod "test-rollover-deployment-854595fc44-gc66h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-gc66h,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5212,SelfLink:/api/v1/namespaces/deployment-5212/pods/test-rollover-deployment-854595fc44-gc66h,UID:0c7f4f38-3fd0-4745-9580-4249ce5c4938,ResourceVersion:16897340,Generation:0,CreationTimestamp:2019-12-16 14:21:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 13f8a761-2c2d-46ad-b6db-af6552eb0a2b 0xc003150a77 0xc003150a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t4cdk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t4cdk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-t4cdk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003150b00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003150b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:21:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:21:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:21:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:21:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-16 14:21:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-16 14:21:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://55aeffb6761267bb267da979112b900557862dbf28902972e4291ae9ed3af4cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:22:02.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5212" for this suite.
Dec 16 14:22:08.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:22:08.718: INFO: namespace deployment-5212 deletion completed in 6.146838732s

• [SLOW TEST:39.484 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
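The object dumps above show only the post-rollover ReplicaSet and pod. As a rough reconstruction, a Deployment matching the fields visible in the log (the `rollover-pod` label, the redis test image, `MinReadySeconds:10`) could look like the sketch below; the replica count and any field not present in the dump are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  namespace: deployment-5212
spec:
  replicas: 1            # assumed; not shown in the dump
  minReadySeconds: 10    # matches MinReadySeconds:10 in the ReplicaSet dump
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The test swaps this pod template mid-rollout and verifies the Deployment converges on the final template only.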
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:22:08.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 16 14:22:08.829: INFO: Waiting up to 5m0s for pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a" in namespace "emptydir-5799" to be "success or failure"
Dec 16 14:22:08.907: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 77.538234ms
Dec 16 14:22:10.918: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088427015s
Dec 16 14:22:13.296: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.466364439s
Dec 16 14:22:15.311: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481327631s
Dec 16 14:22:17.317: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487934614s
Dec 16 14:22:19.325: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Running", Reason="", readiness=true. Elapsed: 10.495969156s
Dec 16 14:22:21.338: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.508684078s
STEP: Saw pod success
Dec 16 14:22:21.338: INFO: Pod "pod-0244e427-a6bd-451d-a648-ad90c90b6e4a" satisfied condition "success or failure"
Dec 16 14:22:21.344: INFO: Trying to get logs from node iruya-node pod pod-0244e427-a6bd-451d-a648-ad90c90b6e4a container test-container: 
STEP: delete the pod
Dec 16 14:22:21.582: INFO: Waiting for pod pod-0244e427-a6bd-451d-a648-ad90c90b6e4a to disappear
Dec 16 14:22:21.589: INFO: Pod pod-0244e427-a6bd-451d-a648-ad90c90b6e4a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:22:21.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5799" for this suite.
Dec 16 14:22:27.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:22:27.817: INFO: namespace emptydir-5799 deletion completed in 6.221995336s

• [SLOW TEST:19.098 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
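The "(non-root,0644,tmpfs)" test name encodes three things: the pod runs as a non-root user, the file is created with mode 0644, and the emptyDir is memory-backed. A hand-written pod exercising the same combination might look like this sketch; the name, user ID, and command are illustrative assumptions, not the e2e framework's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                # non-root, per the test title
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
  restartPolicy: Never
```

`medium: Memory` is what makes the volume tmpfs; omitting it gives a node-disk-backed emptyDir instead.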
------------------------------
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:22:27.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 16 14:22:27.889: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 16 14:22:27.933: INFO: Waiting for terminating namespaces to be deleted...
Dec 16 14:22:27.938: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 16 14:22:27.953: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.953: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 14:22:27.953: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 16 14:22:27.953: INFO: 	Container weave ready: true, restart count 0
Dec 16 14:22:27.953: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 14:22:27.953: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 16 14:22:27.965: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container coredns ready: true, restart count 0
Dec 16 14:22:27.965: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container etcd ready: true, restart count 0
Dec 16 14:22:27.965: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 16 14:22:27.965: INFO: 	Container weave ready: true, restart count 0
Dec 16 14:22:27.965: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 14:22:27.965: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 16 14:22:27.965: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 14:22:27.965: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 16 14:22:27.965: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 16 14:22:27.965: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 16 14:22:27.965: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-41fe581d-e912-4c80-9ab0-5c46131983ac 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-41fe581d-e912-4c80-9ab0-5c46131983ac off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-41fe581d-e912-4c80-9ab0-5c46131983ac
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:22:44.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8399" for this suite.
Dec 16 14:22:58.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:22:58.504: INFO: namespace sched-pred-8399 deletion completed in 14.224484594s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:30.687 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
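The STEP lines above amount to: label a node, then relaunch the pod with a matching `nodeSelector`. Using the label actually printed in the log, the equivalent by hand would be `kubectl label node iruya-node kubernetes.io/e2e-41fe581d-e912-4c80-9ab0-5c46131983ac=42` followed by a pod like this sketch (pod name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo          # hypothetical name
spec:
  nodeSelector:
    # the random label the test applied to iruya-node; value must be a string
    kubernetes.io/e2e-41fe581d-e912-4c80-9ab0-5c46131983ac: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

The scheduler only considers nodes carrying that exact label/value pair, which is what the test asserts.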
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:22:58.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 16 14:22:58.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6105'
Dec 16 14:22:59.005: INFO: stderr: ""
Dec 16 14:22:59.005: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 14:22:59.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6105'
Dec 16 14:22:59.239: INFO: stderr: ""
Dec 16 14:22:59.239: INFO: stdout: "update-demo-nautilus-mxbvd update-demo-nautilus-zr76k "
Dec 16 14:22:59.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxbvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:22:59.452: INFO: stderr: ""
Dec 16 14:22:59.453: INFO: stdout: ""
Dec 16 14:22:59.453: INFO: update-demo-nautilus-mxbvd is created but not running
Dec 16 14:23:04.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6105'
Dec 16 14:23:06.045: INFO: stderr: ""
Dec 16 14:23:06.045: INFO: stdout: "update-demo-nautilus-mxbvd update-demo-nautilus-zr76k "
Dec 16 14:23:06.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxbvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:06.568: INFO: stderr: ""
Dec 16 14:23:06.568: INFO: stdout: ""
Dec 16 14:23:06.568: INFO: update-demo-nautilus-mxbvd is created but not running
Dec 16 14:23:11.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6105'
Dec 16 14:23:11.737: INFO: stderr: ""
Dec 16 14:23:11.737: INFO: stdout: "update-demo-nautilus-mxbvd update-demo-nautilus-zr76k "
Dec 16 14:23:11.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxbvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:11.907: INFO: stderr: ""
Dec 16 14:23:11.908: INFO: stdout: "true"
Dec 16 14:23:11.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mxbvd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:12.039: INFO: stderr: ""
Dec 16 14:23:12.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:23:12.040: INFO: validating pod update-demo-nautilus-mxbvd
Dec 16 14:23:12.053: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:23:12.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 16 14:23:12.053: INFO: update-demo-nautilus-mxbvd is verified up and running
Dec 16 14:23:12.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zr76k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:12.228: INFO: stderr: ""
Dec 16 14:23:12.228: INFO: stdout: "true"
Dec 16 14:23:12.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zr76k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:12.339: INFO: stderr: ""
Dec 16 14:23:12.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:23:12.339: INFO: validating pod update-demo-nautilus-zr76k
Dec 16 14:23:12.374: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:23:12.374: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 16 14:23:12.374: INFO: update-demo-nautilus-zr76k is verified up and running
STEP: rolling-update to new replication controller
Dec 16 14:23:12.377: INFO: scanned /root for discovery docs: 
Dec 16 14:23:12.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6105'
Dec 16 14:23:45.705: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 16 14:23:45.705: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 14:23:45.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6105'
Dec 16 14:23:45.841: INFO: stderr: ""
Dec 16 14:23:45.841: INFO: stdout: "update-demo-kitten-f92z7 update-demo-kitten-m5kx9 "
Dec 16 14:23:45.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f92z7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:46.001: INFO: stderr: ""
Dec 16 14:23:46.001: INFO: stdout: "true"
Dec 16 14:23:46.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f92z7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:46.153: INFO: stderr: ""
Dec 16 14:23:46.154: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 16 14:23:46.154: INFO: validating pod update-demo-kitten-f92z7
Dec 16 14:23:46.178: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 16 14:23:46.178: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 16 14:23:46.178: INFO: update-demo-kitten-f92z7 is verified up and running
Dec 16 14:23:46.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m5kx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:46.318: INFO: stderr: ""
Dec 16 14:23:46.318: INFO: stdout: "true"
Dec 16 14:23:46.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m5kx9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6105'
Dec 16 14:23:46.484: INFO: stderr: ""
Dec 16 14:23:46.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 16 14:23:46.484: INFO: validating pod update-demo-kitten-m5kx9
Dec 16 14:23:46.512: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 16 14:23:46.512: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 16 14:23:46.512: INFO: update-demo-kitten-m5kx9 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:23:46.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6105" for this suite.
Dec 16 14:24:12.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:24:12.713: INFO: namespace kubectl-6105 deletion completed in 26.196295495s

• [SLOW TEST:74.208 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
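The initial ReplicationController piped to `kubectl create -f -` is not shown in the log; a manifest consistent with what is visible (the `name=update-demo` label selector, two replicas, the nautilus image, a container named `update-demo`) would look roughly like this sketch:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Note the stderr line at 14:23:45: `rolling-update` was already deprecated in v1.15 in favor of `kubectl rollout`, which operates on Deployments rather than bare ReplicationControllers.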
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:24:12.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1933577a-6850-4f83-9157-4652169a458e
STEP: Creating a pod to test consume secrets
Dec 16 14:24:12.857: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392" in namespace "projected-163" to be "success or failure"
Dec 16 14:24:12.904: INFO: Pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392": Phase="Pending", Reason="", readiness=false. Elapsed: 47.105297ms
Dec 16 14:24:14.911: INFO: Pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053727891s
Dec 16 14:24:16.925: INFO: Pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067901337s
Dec 16 14:24:18.942: INFO: Pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084787122s
Dec 16 14:24:20.953: INFO: Pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095169941s
STEP: Saw pod success
Dec 16 14:24:20.953: INFO: Pod "pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392" satisfied condition "success or failure"
Dec 16 14:24:20.959: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392 container projected-secret-volume-test: 
STEP: delete the pod
Dec 16 14:24:21.013: INFO: Waiting for pod pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392 to disappear
Dec 16 14:24:21.023: INFO: Pod pod-projected-secrets-28d5d907-38b6-4481-ad98-a7e9977c8392 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:24:21.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-163" for this suite.
Dec 16 14:24:29.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:24:29.286: INFO: namespace projected-163 deletion completed in 8.252809677s

• [SLOW TEST:16.572 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
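"With mappings" means the secret's keys are remapped to different file paths via `items` inside a `projected` volume source. A sketch of such a pod, using the secret name from the log but hypothetical key/path/mount values, might be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo      # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/remapped-key"]   # hypothetical path
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-1933577a-6850-4f83-9157-4652169a458e
          items:
          - key: data-1            # hypothetical secret key
            path: remapped-key     # the "mapping": key surfaces under this path
  restartPolicy: Never
```

Without `items`, every key in the secret would appear as a file named after the key itself.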
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:24:29.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-78, will wait for the garbage collector to delete the pods
Dec 16 14:24:39.437: INFO: Deleting Job.batch foo took: 16.47818ms
Dec 16 14:24:39.738: INFO: Terminating Job.batch foo pods took: 300.780225ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:25:26.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-78" for this suite.
Dec 16 14:25:32.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:25:32.834: INFO: namespace job-78 deletion completed in 6.175091739s

• [SLOW TEST:63.547 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
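The Job named `foo` that the test creates is not dumped in the log; a plausible hand-written equivalent (parallelism, image, and command are assumptions inferred from the "active pods == parallelism" assertion) is sketched below. Deleting it while letting the garbage collector remove the pods, as the test does, corresponds to a cascading `kubectl delete job foo -n job-78`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
  namespace: job-78
spec:
  parallelism: 2                   # assumed; the test checks active pods == parallelism
  template:
    spec:
      containers:
      - name: worker               # hypothetical name
        image: busybox:1.29
        command: ["sleep", "3600"] # long-running so pods stay active until deletion
      restartPolicy: Never
```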
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:25:32.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 16 14:25:32.934: INFO: PodSpec: initContainers in spec.initContainers
Dec 16 14:26:36.151: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bada9474-be4e-4087-8cfc-103c58b4a515", GenerateName:"", Namespace:"init-container-9427", SelfLink:"/api/v1/namespaces/init-container-9427/pods/pod-init-bada9474-be4e-4087-8cfc-103c58b4a515", UID:"63eae11a-1e0c-4450-bda3-edada6b5425c", ResourceVersion:"16898057", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712103132, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"934247873"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z6zk7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0016130c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z6zk7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z6zk7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z6zk7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001503a38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc001cf4360), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001503b00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001503b70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001503b78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001503b7c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103133, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103133, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103133, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103132, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001c599e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020e6230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020e62a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5a8a193bb16198a2bf2c88903c675d12d20f6a1072e11ba57a8b2ceeefdfcf5d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c59a40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c59a20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:26:36.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9427" for this suite.
Dec 16 14:26:58.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:26:58.340: INFO: namespace init-container-9427 deletion completed in 22.154367696s

• [SLOW TEST:85.506 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
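Annotation: the pod dump above reports QOSClass:"Guaranteed" because requests equal limits for every container (cpu "100m" = value 100 at scale -3, memory "52428800" plain bytes). A standalone arithmetic sketch of how those resource.Quantity values decode (local shell only; the real parsing lives in k8s.io/apimachinery):

```shell
# cpu "100m": value=100, scale=-3  ->  100 * 10^-3 = 0.1 cores
awk 'BEGIN { printf "%.1f\n", 100 * 10^-3 }'   # 0.1
# memory "52428800": plain bytes -> MiB
echo $((52428800 / 1024 / 1024))               # 50 (i.e. 50Mi)
```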
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:26:58.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9451.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.53.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.53.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.53.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.53.207_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9451.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9451.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.53.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.53.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.53.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.53.207_tcp@PTR;sleep 1; done
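Annotation: the two probe scripts above derive a pod A record name from the pod IP (dots become dashes, then the namespace zone is appended) and a PTR query name from the service IP (octets reversed under in-addr.arpa). A local sketch of both transforms, using fixed IPs from this run instead of `hostname -i` (the dns-9451 zone name comes from this run's namespace):

```shell
# Pod A record: 10.44.0.1 -> 10-44-0-1.dns-9451.pod.cluster.local
echo "10.44.0.1" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9451.pod.cluster.local"}'
# PTR query name: 10.107.53.207 -> 207.53.107.10.in-addr.arpa.
echo "10.107.53.207" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
```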

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 14:27:10.708: INFO: Unable to read wheezy_udp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.733: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.737: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.743: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.748: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.757: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.765: INFO: Unable to read 10.107.53.207_udp@PTR from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.770: INFO: Unable to read 10.107.53.207_tcp@PTR from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.775: INFO: Unable to read jessie_udp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.778: INFO: Unable to read jessie_tcp@dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.781: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.786: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.791: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.795: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.798: INFO: Unable to read jessie_udp@PodARecord from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.801: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.805: INFO: Unable to read 10.107.53.207_udp@PTR from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.810: INFO: Unable to read 10.107.53.207_tcp@PTR from pod dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d: the server could not find the requested resource (get pods dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d)
Dec 16 14:27:10.810: INFO: Lookups using dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d failed for: [wheezy_udp@dns-test-service.dns-9451.svc.cluster.local wheezy_tcp@dns-test-service.dns-9451.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.107.53.207_udp@PTR 10.107.53.207_tcp@PTR jessie_udp@dns-test-service.dns-9451.svc.cluster.local jessie_tcp@dns-test-service.dns-9451.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9451.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9451.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9451.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.107.53.207_udp@PTR 10.107.53.207_tcp@PTR]

Dec 16 14:27:16.212: INFO: DNS probes using dns-9451/dns-test-5e0d1a92-a19b-46d0-b9e0-146cce1bd63d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:27:16.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9451" for this suite.
Dec 16 14:27:22.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:27:22.845: INFO: namespace dns-9451 deletion completed in 6.161691449s

• [SLOW TEST:24.506 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:27:22.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 16 14:27:22.999: INFO: Waiting up to 5m0s for pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294" in namespace "emptydir-4914" to be "success or failure"
Dec 16 14:27:23.016: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294": Phase="Pending", Reason="", readiness=false. Elapsed: 16.087287ms
Dec 16 14:27:25.023: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023129278s
Dec 16 14:27:27.034: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034407179s
Dec 16 14:27:29.040: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040085684s
Dec 16 14:27:31.045: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045351353s
Dec 16 14:27:33.054: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054184894s
STEP: Saw pod success
Dec 16 14:27:33.054: INFO: Pod "pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294" satisfied condition "success or failure"
Dec 16 14:27:33.059: INFO: Trying to get logs from node iruya-node pod pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294 container test-container: 
STEP: delete the pod
Dec 16 14:27:33.313: INFO: Waiting for pod pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294 to disappear
Dec 16 14:27:33.363: INFO: Pod pod-7e2e9b09-9f26-4a86-a489-7dc7e67d3294 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:27:33.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4914" for this suite.
Dec 16 14:27:39.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:27:39.539: INFO: namespace emptydir-4914 deletion completed in 6.166593464s

• [SLOW TEST:16.693 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
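Annotation: the (root,0666,default) spec above writes a file with mode 0666 into an emptyDir on the default medium and verifies the mode is preserved. A minimal local sketch of that permission check, using a temp file rather than the test container's mount path (which is not shown in this log):

```shell
# Create a file, force mode 0666, and read the octal mode back
f=$(mktemp)
chmod 0666 "$f"
stat -c '%a' "$f"   # 666
rm -f "$f"
```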
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:27:39.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 14:27:39.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce" in namespace "downward-api-7887" to be "success or failure"
Dec 16 14:27:39.698: INFO: Pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 94.595996ms
Dec 16 14:27:41.708: INFO: Pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104750077s
Dec 16 14:27:43.715: INFO: Pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111424167s
Dec 16 14:27:45.726: INFO: Pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122771924s
Dec 16 14:27:47.736: INFO: Pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132561611s
STEP: Saw pod success
Dec 16 14:27:47.736: INFO: Pod "downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce" satisfied condition "success or failure"
Dec 16 14:27:47.740: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce container client-container: 
STEP: delete the pod
Dec 16 14:27:48.175: INFO: Waiting for pod downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce to disappear
Dec 16 14:27:48.183: INFO: Pod downwardapi-volume-e77f5dec-84c1-4248-986a-496abb0ba8ce no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:27:48.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7887" for this suite.
Dec 16 14:27:54.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:27:54.681: INFO: namespace downward-api-7887 deletion completed in 6.4868134s

• [SLOW TEST:15.142 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:27:54.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 14:27:54.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-337'
Dec 16 14:27:54.923: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 16 14:27:54.923: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 16 14:27:57.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-337'
Dec 16 14:27:58.184: INFO: stderr: ""
Dec 16 14:27:58.184: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:27:58.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-337" for this suite.
Dec 16 14:28:04.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:28:04.329: INFO: namespace kubectl-337 deletion completed in 6.13844044s

• [SLOW TEST:9.647 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:28:04.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 16 14:28:04.451: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 16 14:28:04.475: INFO: Waiting for terminating namespaces to be deleted...
Dec 16 14:28:04.478: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Dec 16 14:28:04.496: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 16 14:28:04.497: INFO: 	Container weave ready: true, restart count 0
Dec 16 14:28:04.497: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 14:28:04.497: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.497: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 14:28:04.497: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Dec 16 14:28:04.513: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container etcd ready: true, restart count 0
Dec 16 14:28:04.513: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container weave ready: true, restart count 0
Dec 16 14:28:04.513: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 14:28:04.513: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container coredns ready: true, restart count 0
Dec 16 14:28:04.513: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 16 14:28:04.513: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 14:28:04.513: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 16 14:28:04.513: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 16 14:28:04.513: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 16 14:28:04.513: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e0e04b85838b45], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:28:05.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2641" for this suite.
Dec 16 14:28:11.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:28:11.749: INFO: namespace sched-pred-2641 deletion completed in 6.19205617s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.420 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:28:11.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 16 14:28:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7352'
Dec 16 14:28:12.345: INFO: stderr: ""
Dec 16 14:28:12.345: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 14:28:12.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:12.697: INFO: stderr: ""
Dec 16 14:28:12.697: INFO: stdout: "update-demo-nautilus-g87z5 update-demo-nautilus-vzxlf "
Dec 16 14:28:12.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g87z5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:12.809: INFO: stderr: ""
Dec 16 14:28:12.809: INFO: stdout: ""
Dec 16 14:28:12.809: INFO: update-demo-nautilus-g87z5 is created but not running
Dec 16 14:28:17.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:18.902: INFO: stderr: ""
Dec 16 14:28:18.903: INFO: stdout: "update-demo-nautilus-g87z5 update-demo-nautilus-vzxlf "
Dec 16 14:28:18.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g87z5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:19.446: INFO: stderr: ""
Dec 16 14:28:19.447: INFO: stdout: ""
Dec 16 14:28:19.447: INFO: update-demo-nautilus-g87z5 is created but not running
Dec 16 14:28:24.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:24.616: INFO: stderr: ""
Dec 16 14:28:24.616: INFO: stdout: "update-demo-nautilus-g87z5 update-demo-nautilus-vzxlf "
Dec 16 14:28:24.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g87z5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:24.778: INFO: stderr: ""
Dec 16 14:28:24.778: INFO: stdout: "true"
Dec 16 14:28:24.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g87z5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:24.909: INFO: stderr: ""
Dec 16 14:28:24.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:28:24.909: INFO: validating pod update-demo-nautilus-g87z5
Dec 16 14:28:24.926: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:28:24.926: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 14:28:24.926: INFO: update-demo-nautilus-g87z5 is verified up and running
Dec 16 14:28:24.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzxlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:25.056: INFO: stderr: ""
Dec 16 14:28:25.056: INFO: stdout: "true"
Dec 16 14:28:25.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzxlf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:25.153: INFO: stderr: ""
Dec 16 14:28:25.153: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:28:25.153: INFO: validating pod update-demo-nautilus-vzxlf
Dec 16 14:28:25.159: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:28:25.159: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 14:28:25.159: INFO: update-demo-nautilus-vzxlf is verified up and running
STEP: scaling down the replication controller
Dec 16 14:28:25.161: INFO: scanned /root for discovery docs: 
Dec 16 14:28:25.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7352'
Dec 16 14:28:26.289: INFO: stderr: ""
Dec 16 14:28:26.289: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 14:28:26.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:26.533: INFO: stderr: ""
Dec 16 14:28:26.533: INFO: stdout: "update-demo-nautilus-g87z5 update-demo-nautilus-vzxlf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 16 14:28:31.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:31.724: INFO: stderr: ""
Dec 16 14:28:31.724: INFO: stdout: "update-demo-nautilus-g87z5 update-demo-nautilus-vzxlf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 16 14:28:36.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:36.895: INFO: stderr: ""
Dec 16 14:28:36.895: INFO: stdout: "update-demo-nautilus-vzxlf "
Dec 16 14:28:36.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzxlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:37.058: INFO: stderr: ""
Dec 16 14:28:37.058: INFO: stdout: "true"
Dec 16 14:28:37.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzxlf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:37.158: INFO: stderr: ""
Dec 16 14:28:37.158: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:28:37.158: INFO: validating pod update-demo-nautilus-vzxlf
Dec 16 14:28:37.163: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:28:37.163: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 14:28:37.163: INFO: update-demo-nautilus-vzxlf is verified up and running
STEP: scaling up the replication controller
Dec 16 14:28:37.165: INFO: scanned /root for discovery docs: 
Dec 16 14:28:37.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7352'
Dec 16 14:28:38.330: INFO: stderr: ""
Dec 16 14:28:38.330: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 14:28:38.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:38.612: INFO: stderr: ""
Dec 16 14:28:38.612: INFO: stdout: "update-demo-nautilus-ghb5s update-demo-nautilus-vzxlf "
Dec 16 14:28:38.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ghb5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:38.742: INFO: stderr: ""
Dec 16 14:28:38.742: INFO: stdout: ""
Dec 16 14:28:38.743: INFO: update-demo-nautilus-ghb5s is created but not running
Dec 16 14:28:43.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:44.012: INFO: stderr: ""
Dec 16 14:28:44.012: INFO: stdout: "update-demo-nautilus-ghb5s update-demo-nautilus-vzxlf "
Dec 16 14:28:44.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ghb5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:44.208: INFO: stderr: ""
Dec 16 14:28:44.208: INFO: stdout: ""
Dec 16 14:28:44.208: INFO: update-demo-nautilus-ghb5s is created but not running
Dec 16 14:28:49.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7352'
Dec 16 14:28:49.403: INFO: stderr: ""
Dec 16 14:28:49.403: INFO: stdout: "update-demo-nautilus-ghb5s update-demo-nautilus-vzxlf "
Dec 16 14:28:49.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ghb5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:49.528: INFO: stderr: ""
Dec 16 14:28:49.528: INFO: stdout: "true"
Dec 16 14:28:49.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ghb5s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:49.681: INFO: stderr: ""
Dec 16 14:28:49.682: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:28:49.682: INFO: validating pod update-demo-nautilus-ghb5s
Dec 16 14:28:49.696: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:28:49.696: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 14:28:49.696: INFO: update-demo-nautilus-ghb5s is verified up and running
Dec 16 14:28:49.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzxlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:49.814: INFO: stderr: ""
Dec 16 14:28:49.814: INFO: stdout: "true"
Dec 16 14:28:49.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzxlf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7352'
Dec 16 14:28:49.919: INFO: stderr: ""
Dec 16 14:28:49.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 14:28:49.919: INFO: validating pod update-demo-nautilus-vzxlf
Dec 16 14:28:49.925: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 16 14:28:49.925: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 14:28:49.925: INFO: update-demo-nautilus-vzxlf is verified up and running
STEP: using delete to clean up resources
Dec 16 14:28:49.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7352'
Dec 16 14:28:50.053: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:28:50.053: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 16 14:28:50.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7352'
Dec 16 14:28:50.175: INFO: stderr: "No resources found.\n"
Dec 16 14:28:50.175: INFO: stdout: ""
Dec 16 14:28:50.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7352 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 16 14:28:50.323: INFO: stderr: ""
Dec 16 14:28:50.324: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:28:50.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7352" for this suite.
Dec 16 14:29:12.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:29:12.476: INFO: namespace kubectl-7352 deletion completed in 22.137018439s

• [SLOW TEST:60.726 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
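The go-template repeated throughout the test above (`{{if (exists . "status" "containerStatuses")}}...{{end}}`) prints "true" once the `update-demo` container reports a running state, and prints nothing while the pod is still Pending — hence the alternating "created but not running" / "verified up and running" lines. The equivalent selection logic, sketched in Python over a pod dict whose shape is inferred from the template:

```python
def container_running(pod: dict, container_name: str = "update-demo") -> str:
    """Mimic the readiness go-template: emit 'true' per running match."""
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container_name and "running" in cs.get("state", {}):
            out += "true"
    return out


# A pod with no containerStatuses yet -> empty output -> "not running".
pending_pod = {"status": {}}
# A pod whose update-demo container has entered the running state.
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "..."}}}]}}

print(repr(container_running(pending_pod)))  # ''
print(repr(container_running(running_pod)))  # 'true'
```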
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:29:12.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:29:20.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7417" for this suite.
Dec 16 14:30:12.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:30:12.930: INFO: namespace kubelet-test-7417 deletion completed in 52.179022347s

• [SLOW TEST:60.453 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
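The hostAliases test above verifies that entries from a pod's `spec.hostAliases` end up in the container's `/etc/hosts`. A sketch of that rendering, assuming the kubelet's one-line-per-alias `ip<TAB>hostnames` layout (the alias values and the comment line are illustrative, not taken from this log):

```python
def render_host_aliases(host_aliases: list) -> str:
    """Render hostAliases entries as /etc/hosts lines (assumed layout)."""
    lines = ["# Entries added by HostAliases."]
    for alias in host_aliases:
        lines.append(f"{alias['ip']}\t{' '.join(alias['hostnames'])}")
    return "\n".join(lines) + "\n"


# Hypothetical aliases, mirroring the spec.hostAliases shape.
aliases = [
    {"ip": "123.45.67.89", "hostnames": ["foo.local", "bar.local"]},
    {"ip": "98.76.54.32", "hostnames": ["baz.local"]},
]
print(render_host_aliases(aliases))
```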
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:30:12.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ee70cc50-674e-42b3-a50a-187bb3c0710e
STEP: Creating a pod to test consume configMaps
Dec 16 14:30:13.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149" in namespace "projected-1913" to be "success or failure"
Dec 16 14:30:13.082: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149": Phase="Pending", Reason="", readiness=false. Elapsed: 5.103769ms
Dec 16 14:30:15.087: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010348643s
Dec 16 14:30:17.093: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016079303s
Dec 16 14:30:19.100: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023433048s
Dec 16 14:30:21.117: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040236257s
Dec 16 14:30:23.127: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050908389s
STEP: Saw pod success
Dec 16 14:30:23.128: INFO: Pod "pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149" satisfied condition "success or failure"
Dec 16 14:30:23.134: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 16 14:30:23.275: INFO: Waiting for pod pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149 to disappear
Dec 16 14:30:23.285: INFO: Pod pod-projected-configmaps-e00c74b4-6163-48bd-aae6-4f60897c8149 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:30:23.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1913" for this suite.
Dec 16 14:30:29.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:30:29.589: INFO: namespace projected-1913 deletion completed in 6.245020842s

• [SLOW TEST:16.657 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
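The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above are a poll loop: re-read the pod phase every couple of seconds, logging the elapsed time, until the phase is terminal or the timeout expires. A minimal sketch of that loop; the phase sequence mirrors the Pending-then-Succeeded run logged above, and the poll budget stands in for the 5m timeout:

```python
import itertools


def wait_for_terminal_phase(phases, max_polls: int = 150) -> str:
    """Return the first terminal phase observed, or raise on timeout."""
    for phase in itertools.islice(phases, max_polls):
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")


# Simulated observations, as in the log: several Pending polls, then success.
observed = iter(["Pending"] * 5 + ["Succeeded"])
print(wait_for_terminal_phase(observed))  # Succeeded
```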
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:30:29.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e1a9a8d8-20b8-4f1d-b117-d9d090adfe74
STEP: Creating a pod to test consume configMaps
Dec 16 14:30:29.811: INFO: Waiting up to 5m0s for pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0" in namespace "configmap-5248" to be "success or failure"
Dec 16 14:30:29.846: INFO: Pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.224406ms
Dec 16 14:30:31.858: INFO: Pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046569375s
Dec 16 14:30:33.874: INFO: Pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061940808s
Dec 16 14:30:35.882: INFO: Pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070175073s
Dec 16 14:30:37.891: INFO: Pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079655277s
STEP: Saw pod success
Dec 16 14:30:37.892: INFO: Pod "pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0" satisfied condition "success or failure"
Dec 16 14:30:37.895: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0 container configmap-volume-test: 
STEP: delete the pod
Dec 16 14:30:37.949: INFO: Waiting for pod pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0 to disappear
Dec 16 14:30:37.952: INFO: Pod pod-configmaps-f564a410-d175-4c0c-b91e-0d50f592a0e0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:30:37.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5248" for this suite.
Dec 16 14:30:44.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:30:44.194: INFO: namespace configmap-5248 deletion completed in 6.192199091s

• [SLOW TEST:14.604 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:30:44.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 16 14:30:44.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-135 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 16 14:30:54.905: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 16 14:30:54.905: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:30:56.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-135" for this suite.
Dec 16 14:31:02.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:31:03.021: INFO: namespace kubectl-135 deletion completed in 6.094772817s

• [SLOW TEST:18.826 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
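The `--rm` job above runs `sh -c 'cat && echo stdin closed'` with "abcd1234" piped to stdin: `cat` echoes the input verbatim (no trailing newline), and once stdin closes, `echo` appends "stdin closed" — which is exactly the `abcd1234stdin closed` seen in the captured stdout. The same pipeline can be reproduced locally, assuming a POSIX `sh` on PATH (no cluster needed):

```python
import subprocess

# Feed "abcd1234" (no newline) to the same shell command the job ran.
result = subprocess.run(
    ["sh", "-c", "cat && echo 'stdin closed'"],
    input="abcd1234", capture_output=True, text=True, check=True,
)
print(repr(result.stdout))  # 'abcd1234stdin closed\n'
```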
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:31:03.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 16 14:31:03.215: INFO: Number of nodes with available pods: 0
Dec 16 14:31:03.216: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:04.811: INFO: Number of nodes with available pods: 0
Dec 16 14:31:04.811: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:05.287: INFO: Number of nodes with available pods: 0
Dec 16 14:31:05.287: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:06.233: INFO: Number of nodes with available pods: 0
Dec 16 14:31:06.233: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:07.225: INFO: Number of nodes with available pods: 0
Dec 16 14:31:07.225: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:09.022: INFO: Number of nodes with available pods: 0
Dec 16 14:31:09.022: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:09.307: INFO: Number of nodes with available pods: 0
Dec 16 14:31:09.308: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:10.232: INFO: Number of nodes with available pods: 0
Dec 16 14:31:10.232: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:11.360: INFO: Number of nodes with available pods: 0
Dec 16 14:31:11.360: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:12.255: INFO: Number of nodes with available pods: 0
Dec 16 14:31:12.255: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:13.280: INFO: Number of nodes with available pods: 1
Dec 16 14:31:13.280: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:14.231: INFO: Number of nodes with available pods: 2
Dec 16 14:31:14.231: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 16 14:31:14.305: INFO: Number of nodes with available pods: 1
Dec 16 14:31:14.305: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:15.322: INFO: Number of nodes with available pods: 1
Dec 16 14:31:15.322: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:16.320: INFO: Number of nodes with available pods: 1
Dec 16 14:31:16.321: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:17.326: INFO: Number of nodes with available pods: 1
Dec 16 14:31:17.326: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:18.324: INFO: Number of nodes with available pods: 1
Dec 16 14:31:18.324: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:19.324: INFO: Number of nodes with available pods: 1
Dec 16 14:31:19.324: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:20.327: INFO: Number of nodes with available pods: 1
Dec 16 14:31:20.328: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:21.346: INFO: Number of nodes with available pods: 1
Dec 16 14:31:21.346: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:22.342: INFO: Number of nodes with available pods: 1
Dec 16 14:31:22.342: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:23.326: INFO: Number of nodes with available pods: 1
Dec 16 14:31:23.326: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:24.328: INFO: Number of nodes with available pods: 1
Dec 16 14:31:24.328: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:25.326: INFO: Number of nodes with available pods: 1
Dec 16 14:31:25.326: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:26.328: INFO: Number of nodes with available pods: 1
Dec 16 14:31:26.328: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:27.318: INFO: Number of nodes with available pods: 1
Dec 16 14:31:27.318: INFO: Node iruya-node is running more than one daemon pod
Dec 16 14:31:28.320: INFO: Number of nodes with available pods: 2
Dec 16 14:31:28.320: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8789, will wait for the garbage collector to delete the pods
Dec 16 14:31:28.389: INFO: Deleting DaemonSet.extensions daemon-set took: 10.404564ms
Dec 16 14:31:28.690: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.815443ms
Dec 16 14:31:37.915: INFO: Number of nodes with available pods: 0
Dec 16 14:31:37.915: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 14:31:37.922: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8789/daemonsets","resourceVersion":"16898863"},"items":null}

Dec 16 14:31:37.925: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8789/pods","resourceVersion":"16898863"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:31:37.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8789" for this suite.
Dec 16 14:31:43.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:31:44.072: INFO: namespace daemonsets-8789 deletion completed in 6.130676479s

• [SLOW TEST:41.051 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:31:44.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 16 14:31:44.274: INFO: Waiting up to 5m0s for pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16" in namespace "emptydir-6100" to be "success or failure"
Dec 16 14:31:44.282: INFO: Pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 7.507979ms
Dec 16 14:31:46.296: INFO: Pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022091096s
Dec 16 14:31:48.307: INFO: Pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032307701s
Dec 16 14:31:50.315: INFO: Pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040344976s
Dec 16 14:31:52.369: INFO: Pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094272205s
STEP: Saw pod success
Dec 16 14:31:52.369: INFO: Pod "pod-1d6e65f5-be0e-4180-86db-d78944ab7b16" satisfied condition "success or failure"
Dec 16 14:31:52.377: INFO: Trying to get logs from node iruya-node pod pod-1d6e65f5-be0e-4180-86db-d78944ab7b16 container test-container: 
STEP: delete the pod
Dec 16 14:31:52.463: INFO: Waiting for pod pod-1d6e65f5-be0e-4180-86db-d78944ab7b16 to disappear
Dec 16 14:31:52.589: INFO: Pod pod-1d6e65f5-be0e-4180-86db-d78944ab7b16 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:31:52.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6100" for this suite.
Dec 16 14:31:58.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:31:58.766: INFO: namespace emptydir-6100 deletion completed in 6.156660723s

• [SLOW TEST:14.693 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:31:58.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:31:58.974: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b1eac66d-7ad4-4d47-927c-702e0259034c", Controller:(*bool)(0xc000d27e7a), BlockOwnerDeletion:(*bool)(0xc000d27e7b)}}
Dec 16 14:31:59.086: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3a66ca2f-cc36-4c2a-af9a-8d0bd45fdfb0", Controller:(*bool)(0xc002ab801a), BlockOwnerDeletion:(*bool)(0xc002ab801b)}}
Dec 16 14:31:59.108: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ce6362ff-8cd3-43c0-9001-4d3c3735b933", Controller:(*bool)(0xc000ffb2b2), BlockOwnerDeletion:(*bool)(0xc000ffb2b3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:32:04.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8851" for this suite.
Dec 16 14:32:10.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:32:10.636: INFO: namespace gc-8851 deletion completed in 6.348990236s

• [SLOW TEST:11.870 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:32:10.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-89d9b553-c378-4fbc-a211-a32e1d8d8116
STEP: Creating secret with name s-test-opt-upd-91217fab-cff7-4e33-9f2f-4356361aebda
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-89d9b553-c378-4fbc-a211-a32e1d8d8116
STEP: Updating secret s-test-opt-upd-91217fab-cff7-4e33-9f2f-4356361aebda
STEP: Creating secret with name s-test-opt-create-8fce6ead-fbbc-4b17-8f3a-4caf8fa7be54
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:32:27.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3446" for this suite.
Dec 16 14:32:51.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:32:51.399: INFO: namespace secrets-3446 deletion completed in 24.132857136s

• [SLOW TEST:40.763 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:32:51.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 16 14:33:11.647: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:11.647: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:12.111: INFO: Exec stderr: ""
Dec 16 14:33:12.111: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:12.111: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:12.468: INFO: Exec stderr: ""
Dec 16 14:33:12.468: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:12.469: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:12.976: INFO: Exec stderr: ""
Dec 16 14:33:12.976: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:12.976: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:13.267: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 16 14:33:13.267: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:13.267: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:13.683: INFO: Exec stderr: ""
Dec 16 14:33:13.683: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:13.683: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:14.219: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 16 14:33:14.219: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:14.219: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:14.590: INFO: Exec stderr: ""
Dec 16 14:33:14.590: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:14.590: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:14.832: INFO: Exec stderr: ""
Dec 16 14:33:14.832: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:14.832: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:15.095: INFO: Exec stderr: ""
Dec 16 14:33:15.095: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4813 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:33:15.095: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:33:15.367: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:33:15.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4813" for this suite.
Dec 16 14:33:59.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:33:59.518: INFO: namespace e2e-kubelet-etc-hosts-4813 deletion completed in 44.140791789s

• [SLOW TEST:68.118 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:33:59.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 16 14:33:59.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-488,SelfLink:/api/v1/namespaces/watch-488/configmaps/e2e-watch-test-resource-version,UID:19c029ac-ab68-4b8f-8e3c-165449f6dbe3,ResourceVersion:16899241,Generation:0,CreationTimestamp:2019-12-16 14:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 14:33:59.719: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-488,SelfLink:/api/v1/namespaces/watch-488/configmaps/e2e-watch-test-resource-version,UID:19c029ac-ab68-4b8f-8e3c-165449f6dbe3,ResourceVersion:16899242,Generation:0,CreationTimestamp:2019-12-16 14:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:33:59.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-488" for this suite.
Dec 16 14:34:05.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:34:05.954: INFO: namespace watch-488 deletion completed in 6.230790956s

• [SLOW TEST:6.435 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:34:05.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 16 14:34:06.648: INFO: created pod pod-service-account-defaultsa
Dec 16 14:34:06.648: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 16 14:34:06.699: INFO: created pod pod-service-account-mountsa
Dec 16 14:34:06.699: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 16 14:34:06.709: INFO: created pod pod-service-account-nomountsa
Dec 16 14:34:06.709: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 16 14:34:06.765: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 16 14:34:06.765: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 16 14:34:06.785: INFO: created pod pod-service-account-mountsa-mountspec
Dec 16 14:34:06.785: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 16 14:34:06.882: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 16 14:34:06.882: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 16 14:34:06.916: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 16 14:34:06.917: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 16 14:34:06.966: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 16 14:34:06.966: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 16 14:34:07.057: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 16 14:34:07.057: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:34:07.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-573" for this suite.
Dec 16 14:34:37.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:34:37.449: INFO: namespace svcaccounts-573 deletion completed in 30.343495612s

• [SLOW TEST:31.495 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:34:37.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 16 14:34:37.608: INFO: Waiting up to 5m0s for pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9" in namespace "emptydir-7345" to be "success or failure"
Dec 16 14:34:37.683: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9": Phase="Pending", Reason="", readiness=false. Elapsed: 75.474366ms
Dec 16 14:34:39.693: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084768601s
Dec 16 14:34:41.700: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091627126s
Dec 16 14:34:43.720: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112167393s
Dec 16 14:34:45.727: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119190094s
Dec 16 14:34:47.759: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15157912s
STEP: Saw pod success
Dec 16 14:34:47.760: INFO: Pod "pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9" satisfied condition "success or failure"
Dec 16 14:34:47.765: INFO: Trying to get logs from node iruya-node pod pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9 container test-container: 
STEP: delete the pod
Dec 16 14:34:47.868: INFO: Waiting for pod pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9 to disappear
Dec 16 14:34:47.875: INFO: Pod pod-874e26ba-d3c2-4cb6-b2d0-cc4b297bada9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:34:47.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7345" for this suite.
Dec 16 14:34:53.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:34:54.069: INFO: namespace emptydir-7345 deletion completed in 6.185967505s

• [SLOW TEST:16.620 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:34:54.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 16 14:34:54.301: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899439,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 14:34:54.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899439,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 16 14:35:04.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899453,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 16 14:35:04.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899453,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 16 14:35:14.333: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899467,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 14:35:14.334: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899467,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 16 14:35:24.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899481,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 14:35:24.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-a,UID:b67357bc-96fb-4ab0-a941-456fb4827189,ResourceVersion:16899481,Generation:0,CreationTimestamp:2019-12-16 14:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 16 14:35:34.365: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-b,UID:57b8f0bb-1759-448e-8628-10b543322bf4,ResourceVersion:16899496,Generation:0,CreationTimestamp:2019-12-16 14:35:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 14:35:34.365: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-b,UID:57b8f0bb-1759-448e-8628-10b543322bf4,ResourceVersion:16899496,Generation:0,CreationTimestamp:2019-12-16 14:35:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 16 14:35:44.383: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-b,UID:57b8f0bb-1759-448e-8628-10b543322bf4,ResourceVersion:16899511,Generation:0,CreationTimestamp:2019-12-16 14:35:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 14:35:44.383: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-42,SelfLink:/api/v1/namespaces/watch-42/configmaps/e2e-watch-test-configmap-b,UID:57b8f0bb-1759-448e-8628-10b543322bf4,ResourceVersion:16899511,Generation:0,CreationTimestamp:2019-12-16 14:35:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:35:54.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-42" for this suite.
Dec 16 14:36:00.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:36:00.585: INFO: namespace watch-42 deletion completed in 6.1831183s

• [SLOW TEST:66.515 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:36:00.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 16 14:36:00.712: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 16 14:36:00.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7602'
Dec 16 14:36:01.253: INFO: stderr: ""
Dec 16 14:36:01.253: INFO: stdout: "service/redis-slave created\n"
Dec 16 14:36:01.254: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 16 14:36:01.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7602'
Dec 16 14:36:01.886: INFO: stderr: ""
Dec 16 14:36:01.886: INFO: stdout: "service/redis-master created\n"
Dec 16 14:36:01.887: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 16 14:36:01.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7602'
Dec 16 14:36:02.415: INFO: stderr: ""
Dec 16 14:36:02.416: INFO: stdout: "service/frontend created\n"
Dec 16 14:36:02.416: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 16 14:36:02.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7602'
Dec 16 14:36:02.993: INFO: stderr: ""
Dec 16 14:36:02.993: INFO: stdout: "deployment.apps/frontend created\n"
Dec 16 14:36:02.994: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 16 14:36:02.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7602'
Dec 16 14:36:03.642: INFO: stderr: ""
Dec 16 14:36:03.642: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 16 14:36:03.643: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 16 14:36:03.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7602'
Dec 16 14:36:04.998: INFO: stderr: ""
Dec 16 14:36:04.998: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 16 14:36:04.998: INFO: Waiting for all frontend pods to be Running.
Dec 16 14:36:30.051: INFO: Waiting for frontend to serve content.
Dec 16 14:36:31.373: INFO: Trying to add a new entry to the guestbook.
Dec 16 14:36:31.481: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 16 14:36:31.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7602'
Dec 16 14:36:31.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:36:31.805: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 14:36:31.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7602'
Dec 16 14:36:32.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:36:32.027: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 14:36:32.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7602'
Dec 16 14:36:32.353: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:36:32.354: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 14:36:32.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7602'
Dec 16 14:36:32.477: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:36:32.477: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 14:36:32.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7602'
Dec 16 14:36:32.601: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:36:32.601: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 14:36:32.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7602'
Dec 16 14:36:32.748: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 14:36:32.748: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:36:32.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7602" for this suite.
Dec 16 14:37:18.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:37:19.002: INFO: namespace kubectl-7602 deletion completed in 46.20806203s

• [SLOW TEST:78.416 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:37:19.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8479/secret-test-8914b111-4c6a-4cb9-a926-adb0d2a6e7a0
STEP: Creating a pod to test consume secrets
Dec 16 14:37:19.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b" in namespace "secrets-8479" to be "success or failure"
Dec 16 14:37:19.437: INFO: Pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b": Phase="Pending", Reason="", readiness=false. Elapsed: 110.77239ms
Dec 16 14:37:21.449: INFO: Pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122345411s
Dec 16 14:37:23.479: INFO: Pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15206887s
Dec 16 14:37:25.487: INFO: Pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160726696s
Dec 16 14:37:27.498: INFO: Pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172017501s
STEP: Saw pod success
Dec 16 14:37:27.499: INFO: Pod "pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b" satisfied condition "success or failure"
Dec 16 14:37:27.502: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b container env-test: 
STEP: delete the pod
Dec 16 14:37:27.609: INFO: Waiting for pod pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b to disappear
Dec 16 14:37:27.702: INFO: Pod pod-configmaps-b11cbbac-fa04-4629-80ea-3044222dad2b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:37:27.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8479" for this suite.
Dec 16 14:37:33.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:37:33.878: INFO: namespace secrets-8479 deletion completed in 6.16956456s

• [SLOW TEST:14.875 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:37:33.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 16 14:37:34.042: INFO: Waiting up to 5m0s for pod "pod-089035ad-c8a1-40dc-abac-d716460379a7" in namespace "emptydir-5918" to be "success or failure"
Dec 16 14:37:34.080: INFO: Pod "pod-089035ad-c8a1-40dc-abac-d716460379a7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.191669ms
Dec 16 14:37:36.088: INFO: Pod "pod-089035ad-c8a1-40dc-abac-d716460379a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046449522s
Dec 16 14:37:38.099: INFO: Pod "pod-089035ad-c8a1-40dc-abac-d716460379a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057461149s
Dec 16 14:37:40.144: INFO: Pod "pod-089035ad-c8a1-40dc-abac-d716460379a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102465554s
Dec 16 14:37:42.158: INFO: Pod "pod-089035ad-c8a1-40dc-abac-d716460379a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116551891s
STEP: Saw pod success
Dec 16 14:37:42.159: INFO: Pod "pod-089035ad-c8a1-40dc-abac-d716460379a7" satisfied condition "success or failure"
Dec 16 14:37:42.164: INFO: Trying to get logs from node iruya-node pod pod-089035ad-c8a1-40dc-abac-d716460379a7 container test-container: 
STEP: delete the pod
Dec 16 14:37:43.431: INFO: Waiting for pod pod-089035ad-c8a1-40dc-abac-d716460379a7 to disappear
Dec 16 14:37:43.440: INFO: Pod pod-089035ad-c8a1-40dc-abac-d716460379a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:37:43.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5918" for this suite.
Dec 16 14:37:49.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:37:49.705: INFO: namespace emptydir-5918 deletion completed in 6.25322349s

• [SLOW TEST:15.826 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:37:49.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1216 14:38:01.324473       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 14:38:01.324: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:38:01.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6047" for this suite.
Dec 16 14:38:17.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:38:17.763: INFO: namespace gc-6047 deletion completed in 16.347855514s

• [SLOW TEST:28.056 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:38:17.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:38:17.906: INFO: Creating deployment "nginx-deployment"
Dec 16 14:38:17.922: INFO: Waiting for observed generation 1
Dec 16 14:38:22.141: INFO: Waiting for all required pods to come up
Dec 16 14:38:23.761: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 16 14:38:55.488: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 16 14:38:55.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:9, AvailableReplicas:9, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103935, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103935, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103935, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712103897, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"nginx-deployment-7b8c6f4498\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 16 14:38:57.509: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 16 14:38:57.526: INFO: Updating deployment nginx-deployment
Dec 16 14:38:57.526: INFO: Waiting for observed generation 2
Dec 16 14:38:59.899: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 16 14:39:00.742: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 16 14:39:01.051: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 16 14:39:01.088: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 16 14:39:01.088: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 16 14:39:01.091: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 16 14:39:01.094: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 16 14:39:01.094: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 16 14:39:01.105: INFO: Updating deployment nginx-deployment
Dec 16 14:39:01.105: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 16 14:39:02.112: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 16 14:39:02.996: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 16 14:39:07.422: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2094,SelfLink:/apis/apps/v1/namespaces/deployment-2094/deployments/nginx-deployment,UID:83e5a9a0-1203-4a5e-a543-1dc4702cc181,ResourceVersion:16900351,Generation:3,CreationTimestamp:2019-12-16 14:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-16 14:38:58 +0000 UTC 2019-12-16 14:38:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-16 14:39:02 +0000 UTC 2019-12-16 14:39:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 16 14:39:09.621: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2094,SelfLink:/apis/apps/v1/namespaces/deployment-2094/replicasets/nginx-deployment-55fb7cb77f,UID:3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f,ResourceVersion:16900357,Generation:3,CreationTimestamp:2019-12-16 14:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 83e5a9a0-1203-4a5e-a543-1dc4702cc181 0xc0030062c7 0xc0030062c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 16 14:39:09.621: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 16 14:39:09.621: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2094,SelfLink:/apis/apps/v1/namespaces/deployment-2094/replicasets/nginx-deployment-7b8c6f4498,UID:95683b85-06e5-4f37-8cae-d912e988826a,ResourceVersion:16900347,Generation:3,CreationTimestamp:2019-12-16 14:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 83e5a9a0-1203-4a5e-a543-1dc4702cc181 0xc003006397 0xc003006398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 16 14:39:10.976: INFO: Pod "nginx-deployment-55fb7cb77f-24n9t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-24n9t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-24n9t,UID:ae3135ef-0811-47f8-a42e-50d10f33eb36,ResourceVersion:16900344,Generation:0,CreationTimestamp:2019-12-16 14:39:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298d5c7 0xc00298d5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00298d640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.977: INFO: Pod "nginx-deployment-55fb7cb77f-2p6rn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2p6rn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-2p6rn,UID:5f7c3b33-f00a-44fe-bdf1-b3e18474dd92,ResourceVersion:16900329,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298d6e7 0xc00298d6e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00298d760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.977: INFO: Pod "nginx-deployment-55fb7cb77f-6mxhl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6mxhl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-6mxhl,UID:03cba946-755d-4060-9c7b-8bf618e6c2d9,ResourceVersion:16900345,Generation:0,CreationTimestamp:2019-12-16 14:39:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298d807 0xc00298d808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298d870} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.977: INFO: Pod "nginx-deployment-55fb7cb77f-8rslb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8rslb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-8rslb,UID:cb1732a9-1829-46ee-8584-ec1956667cb1,ResourceVersion:16900266,Generation:0,CreationTimestamp:2019-12-16 14:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298d917 0xc00298d918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00298d990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-16 14:38:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.977: INFO: Pod "nginx-deployment-55fb7cb77f-bsxdj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bsxdj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-bsxdj,UID:f96e9f2a-9687-4052-b486-9ab0d0e31a1c,ResourceVersion:16900291,Generation:0,CreationTimestamp:2019-12-16 14:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298da87 0xc00298da88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298daf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298db10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-16 14:38:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.977: INFO: Pod "nginx-deployment-55fb7cb77f-bztzs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bztzs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-bztzs,UID:336ad45b-9e7f-48ca-8919-470cd0ba20c7,ResourceVersion:16900356,Generation:0,CreationTimestamp:2019-12-16 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298dbe7 0xc00298dbe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00298dc60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298dc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.978: INFO: Pod "nginx-deployment-55fb7cb77f-hxpb5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hxpb5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-hxpb5,UID:146b30d0-d60d-4e17-8187-eab1fb5ec500,ResourceVersion:16900288,Generation:0,CreationTimestamp:2019-12-16 14:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298dd07 0xc00298dd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298dd70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298dd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-16 14:38:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.978: INFO: Pod "nginx-deployment-55fb7cb77f-kj4xp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kj4xp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-kj4xp,UID:4de84c51-4ce3-449b-85e4-6e23b798b667,ResourceVersion:16900293,Generation:0,CreationTimestamp:2019-12-16 14:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298de67 0xc00298de68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00298dee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00298df00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-16 14:39:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.978: INFO: Pod "nginx-deployment-55fb7cb77f-kpxqb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kpxqb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-kpxqb,UID:6bb99801-eda7-4372-b62c-52004b6b93ca,ResourceVersion:16900348,Generation:0,CreationTimestamp:2019-12-16 14:39:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc00298dfd7 0xc00298dfd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026783d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026783f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.978: INFO: Pod "nginx-deployment-55fb7cb77f-ktl2c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ktl2c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-ktl2c,UID:552398f9-9712-4819-be6a-b6cc92e89ce2,ResourceVersion:16900335,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc002678547 0xc002678548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002678690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026786b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.979: INFO: Pod "nginx-deployment-55fb7cb77f-np7kb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-np7kb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-np7kb,UID:b426aaf7-be80-41de-80cd-435d0acbd4f1,ResourceVersion:16900346,Generation:0,CreationTimestamp:2019-12-16 14:39:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc0026788d7 0xc0026788d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002678a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002678ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.979: INFO: Pod "nginx-deployment-55fb7cb77f-ql9dk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ql9dk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-ql9dk,UID:4243f754-5e60-451b-93a6-400bf2e999fd,ResourceVersion:16900314,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc002678bc7 0xc002678bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002678e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002678e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.979: INFO: Pod "nginx-deployment-55fb7cb77f-tcrlz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tcrlz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-55fb7cb77f-tcrlz,UID:5180ab93-31fc-4539-9971-b6f70767dbe2,ResourceVersion:16900282,Generation:0,CreationTimestamp:2019-12-16 14:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3b4743a4-18e8-4fe1-b559-d4d4ee7e2c2f 0xc002679027 0xc002679028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002679230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002679250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-16 14:38:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.979: INFO: Pod "nginx-deployment-7b8c6f4498-2k2dm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2k2dm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-2k2dm,UID:21b17118-f304-4097-87f6-66776c39f622,ResourceVersion:16900200,Generation:0,CreationTimestamp:2019-12-16 14:38:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc002679527 0xc002679528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002679760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002679780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-16 14:38:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://304d4094348c13917d98c26a39979b7ee11248ff8fa290b6ea170a31b60a7bbf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.979: INFO: Pod "nginx-deployment-7b8c6f4498-5cd9x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5cd9x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-5cd9x,UID:3cb56118-8688-4334-a9ba-086f0d2786ef,ResourceVersion:16900366,Generation:0,CreationTimestamp:2019-12-16 14:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc002679aa7 0xc002679aa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002679c10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002679cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-16 14:39:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.979: INFO: Pod "nginx-deployment-7b8c6f4498-7vzh5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7vzh5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-7vzh5,UID:593c7c80-2946-4b39-96f8-1353777b53d1,ResourceVersion:16900224,Generation:0,CreationTimestamp:2019-12-16 14:38:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc002679f27 0xc002679f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-16 14:38:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d36962aab244aeec17a3b8c9c84ed1db681ba9dcdccf1999ad8ee52ab914e9b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.980: INFO: Pod "nginx-deployment-7b8c6f4498-98vqc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98vqc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-98vqc,UID:ae618ab5-c74b-45d6-acb6-bcd9702184e8,ResourceVersion:16900358,Generation:0,CreationTimestamp:2019-12-16 14:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6137 0xc001fb6138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb61b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb61d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-16 14:39:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.980: INFO: Pod "nginx-deployment-7b8c6f4498-bb2sq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bb2sq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-bb2sq,UID:d0c250d7-48a9-41f1-8d66-6f5f6e7d108d,ResourceVersion:16900321,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6297 0xc001fb6298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.980: INFO: Pod "nginx-deployment-7b8c6f4498-dhmzf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dhmzf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-dhmzf,UID:caa857fc-ebf9-45c5-b9a3-7a78e53ad906,ResourceVersion:16900337,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb63b7 0xc001fb63b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.980: INFO: Pod "nginx-deployment-7b8c6f4498-fdmzt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fdmzt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-fdmzt,UID:bbe5ebfc-e723-4bd5-b18e-0a85044d10c4,ResourceVersion:16900213,Generation:0,CreationTimestamp:2019-12-16 14:38:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb64c7 0xc001fb64c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-16 14:38:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f5c0dc90fd715f6c26ad93ec4c51c25b8434132aa16f5ed7898a3dfeda078d81}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.980: INFO: Pod "nginx-deployment-7b8c6f4498-flz9m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-flz9m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-flz9m,UID:6be78e3e-e84a-4dcf-b2c1-ef1a8fb9edbb,ResourceVersion:16900350,Generation:0,CreationTimestamp:2019-12-16 14:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6637 0xc001fb6638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb66a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb66c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-16 14:39:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.980: INFO: Pod "nginx-deployment-7b8c6f4498-gdd7q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gdd7q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-gdd7q,UID:8720efad-7844-4200-b890-3c281e074837,ResourceVersion:16900324,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6787 0xc001fb6788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-j5l9b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j5l9b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-j5l9b,UID:6ea6d6ee-0469-49ed-80f5-f71a628870b6,ResourceVersion:16900340,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb68a7 0xc001fb68a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-jqg8z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jqg8z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-jqg8z,UID:bb8e8b38-5bd0-4099-a943-52928e3a5675,ResourceVersion:16900229,Generation:0,CreationTimestamp:2019-12-16 14:38:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb69c7 0xc001fb69c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-16 14:38:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4d93c67b7f832f90350087cf513919d91f44eed7264d265d544a65e168478316}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-kqx68" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kqx68,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-kqx68,UID:845c4a5d-1547-45b4-980b-adfe570c9276,ResourceVersion:16900339,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6b27 0xc001fb6b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-kzfgm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kzfgm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-kzfgm,UID:a9328f5f-9001-4e96-ab45-356ddd90e3e8,ResourceVersion:16900198,Generation:0,CreationTimestamp:2019-12-16 14:38:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6c37 0xc001fb6c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-16 14:38:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a8e8d4b48ba8c632dfd6dffd6de1d4aaab75bcc00c51a629edbdd3a419833001}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-p589p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p589p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-p589p,UID:10a0e9bf-c50d-47b0-9da0-a3b72e24dbaa,ResourceVersion:16900341,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6da7 0xc001fb6da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-p687k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p687k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-p687k,UID:e8b15ff8-5092-4b1f-bd11-bbd615499253,ResourceVersion:16900327,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6ed7 0xc001fb6ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb6f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb6f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-q9tnm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q9tnm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-q9tnm,UID:6653a7fb-f76a-46f8-9125-21f1c76eddc5,ResourceVersion:16900325,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb6fe7 0xc001fb6fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb7060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb7080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.981: INFO: Pod "nginx-deployment-7b8c6f4498-qp7z9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qp7z9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-qp7z9,UID:9997eabf-585d-43a6-a90e-ed3b3525a213,ResourceVersion:16900206,Generation:0,CreationTimestamp:2019-12-16 14:38:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb7107 0xc001fb7108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb7180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb71a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-16 14:38:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2ab2f13455cdc9fb8acbb805f9cfd9d0fe7dcf65ecd1949331fae579c8336311}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.982: INFO: Pod "nginx-deployment-7b8c6f4498-szt4x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-szt4x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-szt4x,UID:9b4dcf3f-fc19-40d1-a282-07e173d2d486,ResourceVersion:16900218,Generation:0,CreationTimestamp:2019-12-16 14:38:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb7277 0xc001fb7278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb72e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb7300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-16 14:38:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3f9c347338ee64c4d5c730707f73212c0a520bccb2ae72419a605ca2fd390f6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.982: INFO: Pod "nginx-deployment-7b8c6f4498-x7l98" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x7l98,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-x7l98,UID:ea51f683-a4e4-403a-94ab-8e9a689b13ea,ResourceVersion:16900336,Generation:0,CreationTimestamp:2019-12-16 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb73d7 0xc001fb73d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb7450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb7470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:39:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 16 14:39:10.982: INFO: Pod "nginx-deployment-7b8c6f4498-zpcf5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zpcf5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2094,SelfLink:/api/v1/namespaces/deployment-2094/pods/nginx-deployment-7b8c6f4498-zpcf5,UID:1fb915ff-55d6-4bb4-8a98-9b62fd63369d,ResourceVersion:16900210,Generation:0,CreationTimestamp:2019-12-16 14:38:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 95683b85-06e5-4f37-8cae-d912e988826a 0xc001fb74f7 0xc001fb74f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9nk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9nk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-j9nk9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb7570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb7590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 14:38:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-16 14:38:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 14:38:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5e3ba6ad3eb027531a57777ae9b6844959110073f2514d315820e8ac7a270021}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:39:10.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2094" for this suite.
Dec 16 14:40:40.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:40:40.915: INFO: namespace deployment-2094 deletion completed in 1m27.647446195s

• [SLOW TEST:143.152 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
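The `[SLOW TEST]` block above exercises the Deployment "proportional scaling" conformance behavior: when a Deployment is resized mid-rollout, the controller spreads the replica delta across its ReplicaSets in proportion to their current sizes, handing out rounding leftovers to the largest fractional shares. As a minimal illustrative sketch of that distribution rule (not the actual deployment controller code; the function name and tie-breaking order are assumptions for illustration):

```python
from math import floor

def proportional_scale(sizes, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion
    to their current sizes (largest-remainder rounding)."""
    old_total = sum(sizes)
    if old_total == 0:
        return list(sizes)
    # Exact proportional share for each ReplicaSet, then floor it.
    exact = [s * new_total / old_total for s in sizes]
    result = [floor(x) for x in exact]
    # Hand the rounding leftovers to the largest fractional parts first.
    leftover = new_total - sum(result)
    order = sorted(range(len(sizes)),
                   key=lambda i: exact[i] - result[i], reverse=True)
    for i in order[:leftover]:
        result[i] += 1
    return result
```

For example, scaling two ReplicaSets of 5 and 5 up to 15 total yields an 8/7 split rather than starving either set, which is the invariant this e2e test checks across the old and new ReplicaSets of `nginx-deployment`.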
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:40:40.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 16 14:40:41.081: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:40:58.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9297" for this suite.
Dec 16 14:41:04.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:41:04.670: INFO: namespace init-container-9297 deletion completed in 6.171723006s

• [SLOW TEST:23.754 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:41:04.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-ae2faeb6-b7cc-4dca-8b76-a89256938b0f
STEP: Creating configMap with name cm-test-opt-upd-2f2ce50c-f2e0-4f4c-9b5d-5ca413f59190
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ae2faeb6-b7cc-4dca-8b76-a89256938b0f
STEP: Updating configmap cm-test-opt-upd-2f2ce50c-f2e0-4f4c-9b5d-5ca413f59190
STEP: Creating configMap with name cm-test-opt-create-7faf9e3f-25a9-474c-b170-c00e77a9c2c3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:41:19.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3579" for this suite.
Dec 16 14:41:53.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:41:53.197: INFO: namespace projected-3579 deletion completed in 34.173124925s

• [SLOW TEST:48.527 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:41:53.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 16 14:42:02.003: INFO: Successfully updated pod "labelsupdate50aa5b5f-33f2-411d-9959-7358f467df91"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:42:04.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3077" for this suite.
Dec 16 14:42:44.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:42:44.179: INFO: namespace downward-api-3077 deletion completed in 40.10049292s

• [SLOW TEST:50.982 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:42:44.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 14:42:44.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854" in namespace "projected-2684" to be "success or failure"
Dec 16 14:42:44.391: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854": Phase="Pending", Reason="", readiness=false. Elapsed: 11.069787ms
Dec 16 14:42:46.400: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020274141s
Dec 16 14:42:48.413: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033507993s
Dec 16 14:42:50.471: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090685124s
Dec 16 14:42:52.740: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360213834s
Dec 16 14:42:54.789: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.408692543s
STEP: Saw pod success
Dec 16 14:42:54.789: INFO: Pod "downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854" satisfied condition "success or failure"
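The `Waiting up to 5m0s ... Elapsed:` sequence above is the framework's poll loop: check the pod phase every few seconds until it reaches "Succeeded" or "Failed", or the timeout expires. A simplified sketch of that pattern (`wait_for` is an illustrative name, not the e2e framework's API):

```python
import time

def wait_for(check, timeout_s=300.0, interval_s=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Call check() every interval_s until it returns a terminal pod phase
    ("Succeeded" or "Failed") or timeout_s elapses; return (phase, elapsed)."""
    start = clock()
    while True:
        phase = check()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"still {phase!r} after {elapsed:.1f}s")
        sleep(interval_s)

# Simulated pod that stays Pending for a few polls, then succeeds,
# mirroring the Pending/Pending/.../Succeeded lines in the log.
phases = iter(["Pending"] * 4 + ["Succeeded"])
phase, _ = wait_for(lambda: next(phases), interval_s=0.0)
print(phase)  # Succeeded
```

The real framework also treats "Failed" as satisfying the "success or failure" condition, which is why the log's condition string names both outcomes.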
Dec 16 14:42:54.796: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854 container client-container: 
STEP: delete the pod
Dec 16 14:42:54.966: INFO: Waiting for pod downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854 to disappear
Dec 16 14:42:54.974: INFO: Pod downwardapi-volume-6baa1958-06f7-446b-9075-9defa8e69854 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:42:54.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2684" for this suite.
Dec 16 14:43:01.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:43:01.184: INFO: namespace projected-2684 deletion completed in 6.201688949s

• [SLOW TEST:17.004 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:43:01.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 16 14:43:01.246: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 16 14:43:01.252: INFO: Waiting for terminating namespaces to be deleted...
Dec 16 14:43:01.254: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 16 14:43:01.263: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.263: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 14:43:01.263: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 16 14:43:01.263: INFO: 	Container weave ready: true, restart count 0
Dec 16 14:43:01.263: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 14:43:01.263: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 16 14:43:01.304: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 16 14:43:01.304: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container coredns ready: true, restart count 0
Dec 16 14:43:01.304: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container coredns ready: true, restart count 0
Dec 16 14:43:01.304: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container etcd ready: true, restart count 0
Dec 16 14:43:01.304: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container weave ready: true, restart count 0
Dec 16 14:43:01.304: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 14:43:01.304: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 16 14:43:01.304: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 14:43:01.304: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 16 14:43:01.304: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 16 14:43:01.368: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 16 14:43:01.368: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73.15e0e11c5c0b9d55], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1468/filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73.15e0e11d6ba95a52], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73.15e0e11e4d13093b], Reason = [Created], Message = [Created container filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73.15e0e11e68b190ea], Reason = [Started], Message = [Started container filler-pod-0e109312-5cb5-41f3-9b79-cfaa604dae73]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8.15e0e11c55e93767], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1468/filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8.15e0e11d6d0d38d6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8.15e0e11e1ec22f88], Reason = [Created], Message = [Created container filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8.15e0e11e43873f53], Reason = [Started], Message = [Started container filler-pod-3905063e-0d98-4ce4-b3dc-725fe9d443d8]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e0e11eb272dd72], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
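The `FailedScheduling ... 2 Insufficient cpu` event above is the outcome of a simple fit predicate: a pod schedules onto a node only if its CPU request fits in the node's allocatable CPU minus the requests already logged per node. A sketch under that assumption (`parse_cpu` and `fits` are illustrative names; the real kube-scheduler uses `resource.Quantity` and per-node accounting):

```python
def parse_cpu(q: str) -> int:
    """Convert a Kubernetes CPU quantity ("250m", "2") to millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def fits(allocatable_m: int, requested: list[str], new_pod: str) -> bool:
    """True if the new pod's CPU request fits in what is left on the node."""
    used = sum(parse_cpu(q) for q in requested)
    return used + parse_cpu(new_pod) <= allocatable_m

# Requests logged above for iruya-node: kube-proxy 0m, weave-net 20m.
# A small pod fits; after the filler pods consume the headroom, a pod
# sized to the remaining capacity plus one cannot fit on either node,
# which is what produces the "0/2 nodes are available" event.
print(fits(2000, ["0m", "20m"], "100m"))   # True
print(fits(2000, ["0m", "20m"], "2000m"))  # False
```

The allocatable value of 2000m here is a placeholder; the test reads the real allocatable CPU from each node and sizes the filler pods accordingly.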
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:43:12.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1468" for this suite.
Dec 16 14:43:19.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:43:20.249: INFO: namespace sched-pred-1468 deletion completed in 7.487111079s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.066 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:43:20.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 16 14:43:20.416: INFO: Waiting up to 5m0s for pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831" in namespace "downward-api-8399" to be "success or failure"
Dec 16 14:43:20.471: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831": Phase="Pending", Reason="", readiness=false. Elapsed: 55.182628ms
Dec 16 14:43:22.697: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280925484s
Dec 16 14:43:24.711: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295560023s
Dec 16 14:43:26.725: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308877619s
Dec 16 14:43:28.732: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31640148s
Dec 16 14:43:30.743: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.326699062s
STEP: Saw pod success
Dec 16 14:43:30.743: INFO: Pod "downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831" satisfied condition "success or failure"
Dec 16 14:43:30.746: INFO: Trying to get logs from node iruya-node pod downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831 container dapi-container: 
STEP: delete the pod
Dec 16 14:43:30.878: INFO: Waiting for pod downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831 to disappear
Dec 16 14:43:30.888: INFO: Pod downward-api-52a08c13-425d-4c4c-802e-7d71f7cfa831 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:43:30.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8399" for this suite.
Dec 16 14:43:36.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:43:37.066: INFO: namespace downward-api-8399 deletion completed in 6.167187033s

• [SLOW TEST:16.816 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:43:37.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-lth6
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 14:43:37.232: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lth6" in namespace "subpath-4071" to be "success or failure"
Dec 16 14:43:37.270: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.581629ms
Dec 16 14:43:39.287: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054302833s
Dec 16 14:43:41.292: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059581005s
Dec 16 14:43:43.306: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073744516s
Dec 16 14:43:45.315: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 8.082729516s
Dec 16 14:43:47.326: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 10.093186324s
Dec 16 14:43:49.335: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 12.102981002s
Dec 16 14:43:51.345: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 14.112154794s
Dec 16 14:43:53.511: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 16.27892202s
Dec 16 14:43:55.522: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 18.290063254s
Dec 16 14:43:57.531: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 20.298304593s
Dec 16 14:43:59.541: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 22.308755574s
Dec 16 14:44:01.549: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 24.316663978s
Dec 16 14:44:03.563: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Running", Reason="", readiness=true. Elapsed: 26.330780903s
Dec 16 14:44:05.575: INFO: Pod "pod-subpath-test-downwardapi-lth6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.342631424s
STEP: Saw pod success
Dec 16 14:44:05.575: INFO: Pod "pod-subpath-test-downwardapi-lth6" satisfied condition "success or failure"
Dec 16 14:44:05.580: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-lth6 container test-container-subpath-downwardapi-lth6: 
STEP: delete the pod
Dec 16 14:44:05.813: INFO: Waiting for pod pod-subpath-test-downwardapi-lth6 to disappear
Dec 16 14:44:05.818: INFO: Pod pod-subpath-test-downwardapi-lth6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lth6
Dec 16 14:44:05.818: INFO: Deleting pod "pod-subpath-test-downwardapi-lth6" in namespace "subpath-4071"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:44:05.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4071" for this suite.
Dec 16 14:44:11.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:44:12.003: INFO: namespace subpath-4071 deletion completed in 6.17701402s

• [SLOW TEST:34.937 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:44:12.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9079.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9079.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 14:44:24.211: INFO: File wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-07258e3c-6c07-4f9a-bcdc-75016f4dcb09 contains '' instead of 'foo.example.com.'
Dec 16 14:44:24.312: INFO: Lookups using dns-9079/dns-test-07258e3c-6c07-4f9a-bcdc-75016f4dcb09 failed for: [wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local]

Dec 16 14:44:29.410: INFO: DNS probes using dns-test-07258e3c-6c07-4f9a-bcdc-75016f4dcb09 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9079.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9079.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 14:44:43.657: INFO: File wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 contains '' instead of 'bar.example.com.'
Dec 16 14:44:43.672: INFO: File jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 contains '' instead of 'bar.example.com.'
Dec 16 14:44:43.672: INFO: Lookups using dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 failed for: [wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local]

Dec 16 14:44:48.745: INFO: File wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 16 14:44:48.760: INFO: File jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 16 14:44:48.760: INFO: Lookups using dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 failed for: [wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local]

Dec 16 14:44:53.688: INFO: File wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 16 14:44:53.695: INFO: File jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 16 14:44:53.695: INFO: Lookups using dns-9079/dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 failed for: [wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local]

Dec 16 14:44:58.693: INFO: DNS probes using dns-test-5a40e664-fbcb-475a-a1d2-ee73eea29e88 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9079.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9079.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 14:45:15.065: INFO: File wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-caa1a018-3c74-418e-8a72-b1b8af99d4d0 contains '' instead of '10.105.154.166'
Dec 16 14:45:15.074: INFO: File jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local from pod  dns-9079/dns-test-caa1a018-3c74-418e-8a72-b1b8af99d4d0 contains '' instead of '10.105.154.166'
Dec 16 14:45:15.074: INFO: Lookups using dns-9079/dns-test-caa1a018-3c74-418e-8a72-b1b8af99d4d0 failed for: [wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local]

Dec 16 14:45:20.097: INFO: DNS probes using dns-test-caa1a018-3c74-418e-8a72-b1b8af99d4d0 succeeded
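The `Lookups using ... failed for` lines above come from simple bookkeeping: each prober pod writes its `dig` output to a results file, and the test re-polls until every file holds the expected record. A sketch of that comparison (`failed_lookups` is an illustrative name):

```python
def failed_lookups(results: dict[str, str], expected: str) -> list[str]:
    """Return the result-file names whose contents do not match expected,
    ignoring the trailing newline dig appends."""
    return [name for name, content in results.items()
            if content.strip() != expected]

# Mirrors the log: one prober still holds the stale CNAME, one is empty.
results = {
    "wheezy_udp@dns-test-service-3.dns-9079.svc.cluster.local": "foo.example.com.\n",
    "jessie_udp@dns-test-service-3.dns-9079.svc.cluster.local": "",
}
print(failed_lookups(results, "bar.example.com."))
```

An empty file and a stale record are both reported as failures, which is why the log shows `contains '' instead of ...` immediately after an ExternalName change and `contains 'foo.example.com.'` while the old CNAME is still cached.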

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:45:20.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9079" for this suite.
Dec 16 14:45:28.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:45:28.418: INFO: namespace dns-9079 deletion completed in 8.181218279s

• [SLOW TEST:76.414 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
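The test above probes DNS for an ExternalName service (`dns-test-service-3` in namespace `dns-9079`); the later expectation of the A record `10.105.154.166` suggests the test converts the service to a ClusterIP type mid-run. A minimal ExternalName Service of the shape being exercised (the `externalName` target here is illustrative) looks like:

```yaml
# Illustrative ExternalName service: cluster DNS answers lookups for
# dns-test-service-3.dns-9079.svc.cluster.local with a CNAME to the
# external host rather than a cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-9079
spec:
  type: ExternalName
  externalName: example.com   # illustrative target
```

The `dig +short ... A` loops in the probe pods then verify what the cluster DNS returns for this name before and after the service type changes.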
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:45:28.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:45:36.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9250" for this suite.
Dec 16 14:46:29.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:46:29.397: INFO: namespace kubelet-test-9250 deletion completed in 52.734746626s

• [SLOW TEST:60.978 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
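The Kubelet test above verifies that a container with a read-only root filesystem cannot write to `/`. A pod sketch matching that intent (pod name and command are illustrative; the key field is `readOnlyRootFilesystem`) is:

```yaml
# With readOnlyRootFilesystem: true, the write to /file fails inside
# the container, which is what the conformance test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```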
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:46:29.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 14:46:29.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-985'
Dec 16 14:46:31.906: INFO: stderr: ""
Dec 16 14:46:31.906: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 16 14:46:41.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-985 -o json'
Dec 16 14:46:42.194: INFO: stderr: ""
Dec 16 14:46:42.194: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-16T14:46:31Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-985\",\n        \"resourceVersion\": \"16901590\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-985/pods/e2e-test-nginx-pod\",\n        \"uid\": \"ad1ff3bb-0075-4e16-a177-80a744de25e3\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-wltqn\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-wltqn\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-wltqn\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T14:46:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T14:46:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T14:46:38Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T14:46:31Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://b37962973445cadf1b83afad526d972e9d4db1d813fe8501199f66683c866f2d\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2019-12-16T14:46:38Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-16T14:46:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 16 14:46:42.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-985'
Dec 16 14:46:42.732: INFO: stderr: ""
Dec 16 14:46:42.732: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 16 14:46:42.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-985'
Dec 16 14:46:49.213: INFO: stderr: ""
Dec 16 14:46:49.214: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:46:49.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-985" for this suite.
Dec 16 14:46:55.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:46:55.409: INFO: namespace kubectl-985 deletion completed in 6.175107669s

• [SLOW TEST:26.012 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
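The `kubectl replace` step above pipes a manifest into `kubectl replace -f - --namespace=kubectl-985`, swapping the pod's image from `nginx:1.14-alpine` to `busybox:1.29`. A replacement manifest of that shape (reconstructed from the log; exact fields fed by the test are not shown) would be:

```yaml
# Same pod name and namespace as the original, image swapped.
# Note: most of a Pod spec is immutable, so replace on a live Pod
# effectively only permits changes such as the container image.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-985
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29
```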
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:46:55.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 16 14:46:55.506: INFO: Waiting up to 5m0s for pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20" in namespace "containers-2989" to be "success or failure"
Dec 16 14:46:55.517: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382643ms
Dec 16 14:46:57.522: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016233674s
Dec 16 14:46:59.540: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033700082s
Dec 16 14:47:01.552: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045250703s
Dec 16 14:47:03.561: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054551901s
Dec 16 14:47:05.773: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.266634849s
STEP: Saw pod success
Dec 16 14:47:05.773: INFO: Pod "client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20" satisfied condition "success or failure"
Dec 16 14:47:05.784: INFO: Trying to get logs from node iruya-node pod client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20 container test-container: 
STEP: delete the pod
Dec 16 14:47:05.928: INFO: Waiting for pod client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20 to disappear
Dec 16 14:47:05.937: INFO: Pod client-containers-039b4256-95e9-4c57-afbe-397fa7adbf20 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:47:05.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2989" for this suite.
Dec 16 14:47:11.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:47:12.135: INFO: namespace containers-2989 deletion completed in 6.193506092s

• [SLOW TEST:16.725 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
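The "image defaults" test above creates a pod whose container defines neither `command` nor `args`, so the container runs whatever `ENTRYPOINT`/`CMD` the image was built with. A minimal sketch (names and image are illustrative, not the test's actual spec):

```yaml
# With command and args omitted, the kubelet starts the container
# using the image's own ENTRYPOINT and CMD.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults   # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox   # falls back to the image's built-in entrypoint
```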
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:47:12.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 16 14:47:12.300: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:47:36.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2792" for this suite.
Dec 16 14:47:42.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:47:42.964: INFO: namespace pods-2792 deletion completed in 6.202324515s

• [SLOW TEST:30.828 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:47:42.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:47:43.140: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.161707ms)
Dec 16 14:47:43.147: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.13698ms)
Dec 16 14:47:43.153: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.762853ms)
Dec 16 14:47:43.159: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.465439ms)
Dec 16 14:47:43.166: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.37527ms)
Dec 16 14:47:43.171: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.56935ms)
Dec 16 14:47:43.178: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.302337ms)
Dec 16 14:47:43.183: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.52913ms)
Dec 16 14:47:43.190: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.31976ms)
Dec 16 14:47:43.196: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.240977ms)
Dec 16 14:47:43.201: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.678922ms)
Dec 16 14:47:43.208: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.84213ms)
Dec 16 14:47:43.215: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.904645ms)
Dec 16 14:47:43.226: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.958376ms)
Dec 16 14:47:43.234: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.848203ms)
Dec 16 14:47:43.245: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.878917ms)
Dec 16 14:47:43.253: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.929883ms)
Dec 16 14:47:43.264: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.045452ms)
Dec 16 14:47:43.271: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.429859ms)
Dec 16 14:47:43.276: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.218788ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:47:43.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4164" for this suite.
Dec 16 14:47:49.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:47:49.529: INFO: namespace proxy-4164 deletion completed in 6.249719816s

• [SLOW TEST:6.565 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
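The proxy test above issues twenty GETs against the node `proxy` subresource, which relays the request to the kubelet's log file listing. The path structure visible in the log can be sketched as a small helper (the function name is illustrative):

```python
def node_proxy_logs_path(node_name: str) -> str:
    """Build the API-server path for the node 'proxy' subresource that
    serves the kubelet's log directory, as hit repeatedly in the log."""
    return f"/api/v1/nodes/{node_name}/proxy/logs/"

# Matches the path logged for each of the 20 requests.
print(node_proxy_logs_path("iruya-node"))
```

Each response in the log is an HTTP 200 whose body begins with the node's log file names (`alternatives.log`, truncated in the output).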
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:47:49.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 14:48:17.913: INFO: Container started at 2019-12-16 14:47:56 +0000 UTC, pod became ready at 2019-12-16 14:48:16 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:48:17.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9776" for this suite.
Dec 16 14:48:39.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:48:40.215: INFO: namespace container-probe-9776 deletion completed in 22.262870173s

• [SLOW TEST:50.684 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
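The readiness-probe test above checks that the pod does not report Ready before the probe's initial delay elapses (the log shows roughly 20 seconds between container start and readiness). A probe configuration of the kind being exercised (image, command, and exact values are illustrative):

```yaml
# The pod must not become Ready before initialDelaySeconds has
# passed, and the container must never restart during the test.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-demo   # illustrative name
spec:
  containers:
  - name: test-webserver
    image: busybox
    command: ["sh", "-c", "touch /tmp/ready; sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 20
      periodSeconds: 5
```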
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:48:40.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 16 14:48:40.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6532,SelfLink:/api/v1/namespaces/watch-6532/configmaps/e2e-watch-test-watch-closed,UID:dcca0dd6-1475-4d75-9c32-204b92867e22,ResourceVersion:16901860,Generation:0,CreationTimestamp:2019-12-16 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 14:48:40.347: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6532,SelfLink:/api/v1/namespaces/watch-6532/configmaps/e2e-watch-test-watch-closed,UID:dcca0dd6-1475-4d75-9c32-204b92867e22,ResourceVersion:16901861,Generation:0,CreationTimestamp:2019-12-16 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 16 14:48:40.472: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6532,SelfLink:/api/v1/namespaces/watch-6532/configmaps/e2e-watch-test-watch-closed,UID:dcca0dd6-1475-4d75-9c32-204b92867e22,ResourceVersion:16901862,Generation:0,CreationTimestamp:2019-12-16 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 14:48:40.473: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6532,SelfLink:/api/v1/namespaces/watch-6532/configmaps/e2e-watch-test-watch-closed,UID:dcca0dd6-1475-4d75-9c32-204b92867e22,ResourceVersion:16901863,Generation:0,CreationTimestamp:2019-12-16 14:48:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:48:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6532" for this suite.
Dec 16 14:48:46.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:48:46.637: INFO: namespace watch-6532 deletion completed in 6.152121191s

• [SLOW TEST:6.421 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
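The watch test above closes its first watch after observing ResourceVersion 16901861, then opens a new watch from that version and receives only the later MODIFIED (16901862) and DELETED (16901863) events. The resume semantics can be modeled in-memory (this is a conceptual sketch, not the client library's watch API):

```python
# Conceptual model of watch resumption: a watch restarted at
# resourceVersion N delivers only events newer than N.
events = [
    ("ADDED",    16901860),
    ("MODIFIED", 16901861),  # first watch closed after this event
    ("MODIFIED", 16901862),  # mutation while the watch was closed
    ("DELETED",  16901863),
]

def resume_watch(events, last_seen_rv):
    """Yield events that occurred after the last observed resourceVersion."""
    return [(kind, rv) for kind, rv in events if rv > last_seen_rv]

# -> [('MODIFIED', 16901862), ('DELETED', 16901863)]
print(resume_watch(events, 16901861))
```

This is exactly the guarantee the test asserts: no events are missed and none are replayed across the watch restart.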
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:48:46.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 16 14:48:46.778: INFO: Waiting up to 5m0s for pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d" in namespace "var-expansion-3830" to be "success or failure"
Dec 16 14:48:46.781: INFO: Pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662415ms
Dec 16 14:48:48.800: INFO: Pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021725843s
Dec 16 14:48:50.816: INFO: Pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037518629s
Dec 16 14:48:52.824: INFO: Pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045348599s
Dec 16 14:48:54.843: INFO: Pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064739206s
STEP: Saw pod success
Dec 16 14:48:54.843: INFO: Pod "var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d" satisfied condition "success or failure"
Dec 16 14:48:54.848: INFO: Trying to get logs from node iruya-node pod var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d container dapi-container: 
STEP: delete the pod
Dec 16 14:48:54.936: INFO: Waiting for pod var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d to disappear
Dec 16 14:48:54.969: INFO: Pod var-expansion-d9f69113-bfad-4a53-81d9-a5c50ed2264d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:48:54.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3830" for this suite.
Dec 16 14:49:00.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:49:01.122: INFO: namespace var-expansion-3830 deletion completed in 6.148246216s

• [SLOW TEST:14.485 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
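The variable-expansion test above verifies that `$(VAR)` references in a container's `command` are substituted with values from the pod's environment. A simplified re-implementation of that substitution rule (an approximation for illustration; the real kubelet also supports `$$` escaping):

```python
import re

def expand_command(args, env):
    """Approximate Kubernetes $(VAR) expansion in container commands:
    $(NAME) is replaced when NAME is defined in the environment;
    references to undefined names are left verbatim."""
    def sub(match):
        name = match.group(1)
        return env.get(name, match.group(0))  # keep $(NAME) if undefined
    pattern = r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)"
    return [re.sub(pattern, sub, a) for a in args]

# -> ['echo', 'var-expansion-test', '$(MISSING)']
print(expand_command(["echo", "$(POD_NAME)", "$(MISSING)"],
                     {"POD_NAME": "var-expansion-test"}))
```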
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:49:01.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 16 14:49:19.353: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:19.393: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:21.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:21.408: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:23.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:23.406: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:25.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:25.403: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:27.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:27.426: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:29.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:29.405: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:31.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:31.405: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:33.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:33.403: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:35.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:35.654: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:37.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:37.408: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:39.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:39.403: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:41.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:41.401: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:43.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:43.411: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:45.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:45.404: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 14:49:47.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 14:49:47.404: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:49:47.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7121" for this suite.
Dec 16 14:50:09.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:50:09.666: INFO: namespace container-lifecycle-hook-7121 deletion completed in 22.207018874s

• [SLOW TEST:68.544 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
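The run of "Waiting for pod pod-with-prestop-exec-hook to disappear" lines above is the framework's fixed-interval polling loop: check a condition, sleep, repeat until success or timeout. A minimal Python sketch of that pattern (the helper name, 2-second interval, and fake condition are illustrative assumptions, not the framework's actual code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: a "pod" that disappears after three polls.
polls = {"n": 0}
def pod_gone():
    polls["n"] += 1
    return polls["n"] >= 3

assert wait_for(pod_gone, timeout=10, interval=0)  # interval=0 keeps the demo fast
```

In the real test the condition is a GET on the pod that succeeds once the API server returns NotFound, which is why each "Waiting" line is paired with a "still exists" (or finally "no longer exists") line.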
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:50:09.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 14:50:09.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e" in namespace "projected-8188" to be "success or failure"
Dec 16 14:50:09.857: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.696413ms
Dec 16 14:50:11.866: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025931469s
Dec 16 14:50:13.877: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036710064s
Dec 16 14:50:15.888: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047775889s
Dec 16 14:50:17.902: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061839053s
Dec 16 14:50:19.910: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069182436s
STEP: Saw pod success
Dec 16 14:50:19.910: INFO: Pod "downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e" satisfied condition "success or failure"
Dec 16 14:50:19.913: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e container client-container: 
STEP: delete the pod
Dec 16 14:50:19.970: INFO: Waiting for pod downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e to disappear
Dec 16 14:50:19.988: INFO: Pod downwardapi-volume-e517aebf-1721-456d-978e-32d17c84524e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:50:19.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8188" for this suite.
Dec 16 14:50:26.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:50:26.132: INFO: namespace projected-8188 deletion completed in 6.13695002s

• [SLOW TEST:16.465 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
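The DefaultMode test above verifies that files projected by the downward API volume are created with the requested permission bits (0644 when no mode is specified). A hedged local sketch of that permission check in Python, using a temporary file in place of a real projected volume:

```python
import os
import stat
import tempfile

# Create a stand-in file and apply the mode a projected volume uses by default.
default_mode = 0o644
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, default_mode)

mode_bits = stat.S_IMODE(os.stat(path).st_mode)
assert mode_bits == 0o644
# Rendered the way `ls -l` inside the test pod would show it:
assert stat.filemode(os.stat(path).st_mode) == "-rw-r--r--"
os.remove(path)
```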
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:50:26.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:50:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5964" for this suite.
Dec 16 14:50:38.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:50:38.750: INFO: namespace namespaces-5964 deletion completed in 6.200475722s
STEP: Destroying namespace "nsdeletetest-5034" for this suite.
Dec 16 14:50:38.757: INFO: Namespace nsdeletetest-5034 was already deleted
STEP: Destroying namespace "nsdeletetest-3967" for this suite.
Dec 16 14:50:44.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:50:44.927: INFO: namespace nsdeletetest-3967 deletion completed in 6.169646919s

• [SLOW TEST:18.794 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
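The Namespaces test asserts a cascading-delete invariant: deleting a namespace deletes every Service inside it, and a recreated namespace with the same name starts empty. A toy in-memory model of that invariant (the class and names are purely illustrative; this is not client-go):

```python
class Cluster:
    """Toy model: namespaces own services; deleting a namespace cascades."""
    def __init__(self):
        self.namespaces = {}          # namespace name -> set of service names

    def create_namespace(self, name):
        self.namespaces[name] = set()

    def create_service(self, ns, svc):
        self.namespaces[ns].add(svc)

    def delete_namespace(self, name):
        del self.namespaces[name]     # the services go with it

c = Cluster()
c.create_namespace("nsdeletetest")
c.create_service("nsdeletetest", "test-service")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")    # recreate under the same name
assert c.namespaces["nsdeletetest"] == set()  # no services survive recreation
```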
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:50:44.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-765584ea-e3f4-4ca7-b431-da269b667a58 in namespace container-probe-7510
Dec 16 14:50:55.043: INFO: Started pod test-webserver-765584ea-e3f4-4ca7-b431-da269b667a58 in namespace container-probe-7510
STEP: checking the pod's current state and verifying that restartCount is present
Dec 16 14:50:55.050: INFO: Initial restart count of pod test-webserver-765584ea-e3f4-4ca7-b431-da269b667a58 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:54:57.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7510" for this suite.
Dec 16 14:55:03.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:55:03.471: INFO: namespace container-probe-7510 deletion completed in 6.226302451s

• [SLOW TEST:258.543 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
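The probe test above starts a webserver pod, records its initial restartCount of 0, then watches for roughly four minutes to confirm the /healthz liveness probe never triggers a restart. The probe itself is just a periodic HTTP GET expecting a 2xx status; a minimal local sketch (the handler, port selection, and response body are assumptions for illustration):

```python
import http.server
import threading
import urllib.request

class Healthz(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A healthy server answers /healthz with 200; anything else is 404.
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One probe attempt, in the spirit of the kubelet's httpGet liveness check.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz").getcode()
assert status == 200
server.shutdown()
```

As long as every probe returns 2xx within its timeout, the kubelet leaves the container alone, which is exactly what the unchanged restartCount demonstrates.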
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:55:03.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4113
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 14:55:03.573: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 14:55:39.998: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4113 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:55:39.998: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:55:41.489: INFO: Found all expected endpoints: [netserver-0]
Dec 16 14:55:41.500: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4113 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 14:55:41.500: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 14:55:42.883: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:55:42.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4113" for this suite.
Dec 16 14:56:09.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:56:09.160: INFO: namespace pod-network-test-4113 deletion completed in 26.244745128s

• [SLOW TEST:65.689 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
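The UDP check above shells into a host-network pod and runs `echo hostName | nc -w 1 -u <pod-ip> 8081`, expecting each netserver pod to echo its hostname back. The same request/response exchange can be sketched with raw sockets over loopback (the addresses, payload, and reply string are assumptions for illustration):

```python
import socket
import threading

def udp_echo_server(sock):
    # Reply to one datagram with a fixed "hostname", like the netserver pod.
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(b"netserver-0", addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port instead of 8081
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)                          # the equivalent of nc's -w 1
client.sendto(b"hostName\n", server.getsockname())
reply, _ = client.recvfrom(1024)
assert reply == b"netserver-0"
```

The test passes once every expected endpoint (here netserver-0 and netserver-1, one per node) has answered, hence the two "Found all expected endpoints" lines.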
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:56:09.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6b215b28-4328-47e0-9a8d-9cd3fe9dad3e
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:56:21.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1128" for this suite.
Dec 16 14:56:53.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:56:53.689: INFO: namespace configmap-1128 deletion completed in 32.245600146s

• [SLOW TEST:44.529 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
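A ConfigMap's `binaryData` field carries arbitrary bytes base64-encoded in the API object, and the kubelet writes the decoded bytes into the mounted volume, which is what the "Waiting for pod with binary data" step checks. The encode/decode round trip, sketched in Python (the sample bytes are arbitrary):

```python
import base64

# Arbitrary non-UTF-8 bytes, as a ConfigMap binaryData entry would hold.
payload = bytes([0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0xFF])

# What the API server stores (base64 text in the ConfigMap object) ...
encoded = base64.b64encode(payload).decode("ascii")
# ... and what the kubelet materializes in the mounted volume file.
decoded = base64.b64decode(encoded)

assert decoded == payload
assert encoded == "3q2+7wD/"
```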
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:56:53.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ce2e1b50-3576-4d37-8795-6222e5040002
STEP: Creating a pod to test consume secrets
Dec 16 14:56:53.903: INFO: Waiting up to 5m0s for pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2" in namespace "secrets-6795" to be "success or failure"
Dec 16 14:56:53.931: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.379786ms
Dec 16 14:56:55.940: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036413924s
Dec 16 14:56:57.946: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042308508s
Dec 16 14:56:59.955: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051865574s
Dec 16 14:57:01.966: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062497723s
Dec 16 14:57:03.976: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072710472s
STEP: Saw pod success
Dec 16 14:57:03.976: INFO: Pod "pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2" satisfied condition "success or failure"
Dec 16 14:57:03.986: INFO: Trying to get logs from node iruya-node pod pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2 container secret-volume-test: 
STEP: delete the pod
Dec 16 14:57:04.057: INFO: Waiting for pod pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2 to disappear
Dec 16 14:57:04.076: INFO: Pod pod-secrets-9fe236c5-54de-4514-a814-5221d67d3ac2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 14:57:04.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6795" for this suite.
Dec 16 14:57:10.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 14:57:10.229: INFO: namespace secrets-6795 deletion completed in 6.147377171s

• [SLOW TEST:16.538 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 14:57:10.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9269
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 16 14:57:10.366: INFO: Found 0 stateful pods, waiting for 3
Dec 16 14:57:20.376: INFO: Found 2 stateful pods, waiting for 3
Dec 16 14:57:30.380: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:57:30.381: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:57:30.381: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 16 14:57:40.379: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:57:40.379: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:57:40.379: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 14:57:40.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9269 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 14:57:43.264: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 16 14:57:43.264: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 14:57:43.264: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 16 14:57:53.369: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 16 14:58:03.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9269 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:58:03.922: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 16 14:58:03.922: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 14:58:03.922: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 14:58:13.994: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:58:13.994: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:13.994: INFO: Waiting for Pod statefulset-9269/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:13.994: INFO: Waiting for Pod statefulset-9269/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:24.021: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:58:24.021: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:24.021: INFO: Waiting for Pod statefulset-9269/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:34.009: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:58:34.009: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:34.009: INFO: Waiting for Pod statefulset-9269/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:44.055: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:58:44.055: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 14:58:54.018: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 16 14:59:04.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9269 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 14:59:04.618: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 16 14:59:04.619: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 14:59:04.619: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 14:59:14.714: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 16 14:59:24.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9269 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 14:59:25.179: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 16 14:59:25.179: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 14:59:25.179: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 14:59:35.221: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:59:35.221: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 14:59:35.221: INFO: Waiting for Pod statefulset-9269/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 14:59:45.233: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:59:45.233: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 14:59:45.233: INFO: Waiting for Pod statefulset-9269/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 14:59:55.293: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
Dec 16 14:59:55.293: INFO: Waiting for Pod statefulset-9269/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 15:00:05.239: INFO: Waiting for StatefulSet statefulset-9269/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 16 15:00:15.240: INFO: Deleting all statefulset in ns statefulset-9269
Dec 16 15:00:15.245: INFO: Scaling statefulset ss2 to 0
Dec 16 15:00:45.331: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 15:00:45.336: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:00:45.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9269" for this suite.
Dec 16 15:00:53.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:00:53.672: INFO: namespace statefulset-9269 deletion completed in 8.26921066s

• [SLOW TEST:223.441 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
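The update phases above touch pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), and the "Waiting for Pod ... to have revision" lines track each pod moving from the old controller revision to the new one. The ordering rule for a StatefulSet's RollingUpdate strategy is simple enough to state as code (the function name is illustrative):

```python
def rolling_update_order(name, replicas):
    """StatefulSet rolling updates proceed from the highest ordinal down to 0."""
    return [f"{name}-{i}" for i in range(replicas - 1, -1, -1)]

order = rolling_update_order("ss2", 3)
assert order == ["ss2-2", "ss2-1", "ss2-0"]
```

The rollback in the second half of the test is just another rolling update toward the previous revision, so it walks the pods in the same high-to-low order.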
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:00:53.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-73d30b4c-d83e-494a-9476-6c656375348c
STEP: Creating secret with name s-test-opt-upd-ac7bf27c-eaf5-4cd4-ae07-d599a0f127a1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-73d30b4c-d83e-494a-9476-6c656375348c
STEP: Updating secret s-test-opt-upd-ac7bf27c-eaf5-4cd4-ae07-d599a0f127a1
STEP: Creating secret with name s-test-opt-create-78c6b1ac-2878-4b59-aaba-325ccabffc63
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:01:08.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9832" for this suite.
Dec 16 15:01:30.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:01:30.256: INFO: namespace projected-9832 deletion completed in 22.176153109s

• [SLOW TEST:36.583 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:01:30.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:01:30.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8682" for this suite.
Dec 16 15:01:36.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:01:36.639: INFO: namespace kubelet-test-8682 deletion completed in 6.186579474s

• [SLOW TEST:6.383 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:01:36.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-573e4b13-c302-41e0-8441-45ead3e7c6c3
STEP: Creating secret with name secret-projected-all-test-volume-73fe3d25-09dc-4cfc-bb19-f23843972406
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 16 15:01:36.746: INFO: Waiting up to 5m0s for pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21" in namespace "projected-9886" to be "success or failure"
Dec 16 15:01:36.777: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21": Phase="Pending", Reason="", readiness=false. Elapsed: 30.44468ms
Dec 16 15:01:38.791: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044068738s
Dec 16 15:01:40.798: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051105831s
Dec 16 15:01:42.813: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066946409s
Dec 16 15:01:44.825: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078371794s
Dec 16 15:01:46.832: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085481011s
STEP: Saw pod success
Dec 16 15:01:46.832: INFO: Pod "projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21" satisfied condition "success or failure"
Dec 16 15:01:46.835: INFO: Trying to get logs from node iruya-node pod projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21 container projected-all-volume-test: 
STEP: delete the pod
Dec 16 15:01:46.929: INFO: Waiting for pod projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21 to disappear
Dec 16 15:01:46.938: INFO: Pod projected-volume-95fcf7a8-a82b-41ae-9d40-79b6d6e95a21 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:01:46.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9886" for this suite.
Dec 16 15:01:52.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:01:53.051: INFO: namespace projected-9886 deletion completed in 6.104003596s

• [SLOW TEST:16.411 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:01:53.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-09cee1ef-54ef-4c19-b486-d0be061c05aa
STEP: Creating a pod to test consume secrets
Dec 16 15:01:53.185: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03" in namespace "projected-4129" to be "success or failure"
Dec 16 15:01:53.189: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03": Phase="Pending", Reason="", readiness=false. Elapsed: 3.924724ms
Dec 16 15:01:55.225: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040026983s
Dec 16 15:01:57.234: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049601869s
Dec 16 15:01:59.239: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054060586s
Dec 16 15:02:01.300: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115695831s
Dec 16 15:02:03.361: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17580792s
STEP: Saw pod success
Dec 16 15:02:03.361: INFO: Pod "pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03" satisfied condition "success or failure"
Dec 16 15:02:03.368: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03 container secret-volume-test: 
STEP: delete the pod
Dec 16 15:02:03.662: INFO: Waiting for pod pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03 to disappear
Dec 16 15:02:03.667: INFO: Pod pod-projected-secrets-1e6dc435-15ce-4f8d-95d0-91a2d2710c03 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:02:03.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4129" for this suite.
Dec 16 15:02:09.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:02:09.817: INFO: namespace projected-4129 deletion completed in 6.144512853s

• [SLOW TEST:16.765 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:02:09.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 16 15:02:18.568: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c20f0d65-b4fb-4f3d-8862-0edd1eb4d6bc"
Dec 16 15:02:18.568: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c20f0d65-b4fb-4f3d-8862-0edd1eb4d6bc" in namespace "pods-6139" to be "terminated due to deadline exceeded"
Dec 16 15:02:18.577: INFO: Pod "pod-update-activedeadlineseconds-c20f0d65-b4fb-4f3d-8862-0edd1eb4d6bc": Phase="Running", Reason="", readiness=true. Elapsed: 9.202452ms
Dec 16 15:02:20.599: INFO: Pod "pod-update-activedeadlineseconds-c20f0d65-b4fb-4f3d-8862-0edd1eb4d6bc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.030377772s
Dec 16 15:02:20.599: INFO: Pod "pod-update-activedeadlineseconds-c20f0d65-b4fb-4f3d-8862-0edd1eb4d6bc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:02:20.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6139" for this suite.
Dec 16 15:02:26.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:02:26.728: INFO: namespace pods-6139 deletion completed in 6.116223181s

• [SLOW TEST:16.911 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:02:26.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-07bb0f4f-28a9-4efd-b904-49755e5c423b
STEP: Creating a pod to test consume secrets
Dec 16 15:02:26.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55" in namespace "projected-6793" to be "success or failure"
Dec 16 15:02:26.895: INFO: Pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55": Phase="Pending", Reason="", readiness=false. Elapsed: 11.356129ms
Dec 16 15:02:28.905: INFO: Pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021122138s
Dec 16 15:02:31.014: INFO: Pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130147705s
Dec 16 15:02:33.033: INFO: Pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148945949s
Dec 16 15:02:35.041: INFO: Pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157075915s
STEP: Saw pod success
Dec 16 15:02:35.041: INFO: Pod "pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55" satisfied condition "success or failure"
Dec 16 15:02:35.046: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55 container projected-secret-volume-test: 
STEP: delete the pod
Dec 16 15:02:35.102: INFO: Waiting for pod pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55 to disappear
Dec 16 15:02:35.112: INFO: Pod pod-projected-secrets-ab3205e2-417e-4d53-9270-bad44c6dab55 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:02:35.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6793" for this suite.
Dec 16 15:02:41.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:02:41.323: INFO: namespace projected-6793 deletion completed in 6.197338416s

• [SLOW TEST:14.595 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:02:41.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9cbc84f7-7553-4130-a850-dd91454b3923
STEP: Creating a pod to test consume secrets
Dec 16 15:02:41.542: INFO: Waiting up to 5m0s for pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef" in namespace "secrets-3021" to be "success or failure"
Dec 16 15:02:41.633: INFO: Pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef": Phase="Pending", Reason="", readiness=false. Elapsed: 91.179106ms
Dec 16 15:02:43.648: INFO: Pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106225455s
Dec 16 15:02:45.658: INFO: Pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116202351s
Dec 16 15:02:47.676: INFO: Pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134613003s
Dec 16 15:02:49.686: INFO: Pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143964005s
STEP: Saw pod success
Dec 16 15:02:49.686: INFO: Pod "pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef" satisfied condition "success or failure"
Dec 16 15:02:49.691: INFO: Trying to get logs from node iruya-node pod pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef container secret-volume-test: 
STEP: delete the pod
Dec 16 15:02:49.837: INFO: Waiting for pod pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef to disappear
Dec 16 15:02:49.856: INFO: Pod pod-secrets-8c8940db-946f-438a-828a-94cbc87d36ef no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:02:49.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3021" for this suite.
Dec 16 15:02:55.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:02:56.100: INFO: namespace secrets-3021 deletion completed in 6.223587363s

• [SLOW TEST:14.777 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:02:56.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7727.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7727.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7727.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7727.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7727.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7727.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 15:03:08.463: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8: the server could not find the requested resource (get pods dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8)
Dec 16 15:03:08.489: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8: the server could not find the requested resource (get pods dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8)
Dec 16 15:03:08.541: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7727.svc.cluster.local from pod dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8: the server could not find the requested resource (get pods dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8)
Dec 16 15:03:08.559: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8: the server could not find the requested resource (get pods dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8)
Dec 16 15:03:08.566: INFO: Unable to read jessie_udp@PodARecord from pod dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8: the server could not find the requested resource (get pods dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8)
Dec 16 15:03:08.571: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8: the server could not find the requested resource (get pods dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8)
Dec 16 15:03:08.571: INFO: Lookups using dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7727.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 16 15:03:13.647: INFO: DNS probes using dns-7727/dns-test-59ffccbb-7e5c-4feb-a013-c4c95b92f4e8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:03:13.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7727" for this suite.
Dec 16 15:03:21.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:03:21.943: INFO: namespace dns-7727 deletion completed in 8.235910043s

• [SLOW TEST:25.842 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:03:21.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 16 15:03:22.108: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:03:23.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9731" for this suite.
Dec 16 15:03:29.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:03:29.423: INFO: namespace custom-resource-definition-9731 deletion completed in 6.169570395s

• [SLOW TEST:7.480 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:03:29.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 16 15:03:29.543: INFO: Waiting up to 5m0s for pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476" in namespace "containers-1755" to be "success or failure"
Dec 16 15:03:29.554: INFO: Pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476": Phase="Pending", Reason="", readiness=false. Elapsed: 10.94581ms
Dec 16 15:03:31.566: INFO: Pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023251094s
Dec 16 15:03:33.604: INFO: Pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06105147s
Dec 16 15:03:35.620: INFO: Pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076820016s
Dec 16 15:03:37.639: INFO: Pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096006721s
STEP: Saw pod success
Dec 16 15:03:37.639: INFO: Pod "client-containers-5485bf65-0a32-4af1-9947-ce20c6993476" satisfied condition "success or failure"
Dec 16 15:03:37.643: INFO: Trying to get logs from node iruya-node pod client-containers-5485bf65-0a32-4af1-9947-ce20c6993476 container test-container: 
STEP: delete the pod
Dec 16 15:03:37.705: INFO: Waiting for pod client-containers-5485bf65-0a32-4af1-9947-ce20c6993476 to disappear
Dec 16 15:03:37.714: INFO: Pod client-containers-5485bf65-0a32-4af1-9947-ce20c6993476 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:03:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1755" for this suite.
Dec 16 15:03:43.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:03:43.967: INFO: namespace containers-1755 deletion completed in 6.247945258s

• [SLOW TEST:14.544 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:03:43.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 16 15:03:44.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 16 15:03:44.197: INFO: stderr: ""
Dec 16 15:03:44.198: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:03:44.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4857" for this suite.
Dec 16 15:03:50.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:03:50.359: INFO: namespace kubectl-4857 deletion completed in 6.156679744s

• [SLOW TEST:6.391 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:03:50.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2852
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 15:03:50.446: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 15:04:26.640: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2852 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 15:04:26.640: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 15:04:27.422: INFO: Waiting for endpoints: map[]
Dec 16 15:04:27.441: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2852 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 15:04:27.441: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 15:04:27.794: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:04:27.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2852" for this suite.
Dec 16 15:04:51.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:04:52.074: INFO: namespace pod-network-test-2852 deletion completed in 24.267639016s

• [SLOW TEST:61.714 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
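Annotation: the intra-pod UDP check above works by curling a `/dial` endpoint on the netexec probe pod, which relays a `hostName` request to the target pod over UDP and reports which pod answered. The URL shape is taken directly from the `ExecWithOptions` lines in the log; this sketch just reconstructs how that URL is assembled.

```python
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, target_port, protocol="udp", tries=1):
    """Build the netexec /dial URL the e2e test curls from the
    host-test-container-pod. The probe container (listening on :8080)
    relays a hostName request to the target pod over the given protocol."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{probe_ip}:8080/dial?{query}"

# Reproduces the first request seen in the log above.
url = dial_url("10.44.0.2", "10.32.0.4", 8081)
```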
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:04:52.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1216 15:04:55.956025       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 15:04:55.956: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:04:55.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7977" for this suite.
Dec 16 15:05:01.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:05:02.132: INFO: namespace gc-7977 deletion completed in 6.172479105s

• [SLOW TEST:10.057 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
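Annotation: the garbage-collector spec above relies on `ownerReferences` — the ReplicaSet created by the Deployment carries a controller reference back to it, and when the Deployment is deleted without orphaning, the GC deletes the dependent (the "expected 0 rs, got 1 rs" lines are just the test polling until that happens). A minimal sketch of what such a controller reference looks like; the object names and UID here are hypothetical, not from the log.

```python
def controller_owner(obj):
    """Return the controller ownerReference of a Kubernetes object dict,
    or None. Dependents with such a reference are deleted by the garbage
    collector when their owner is deleted in non-orphaning mode."""
    for ref in obj.get("metadata", {}).get("ownerReferences", []):
        if ref.get("controller"):
            return ref
    return None

# Hypothetical ReplicaSet as a Deployment's dependent.
rs = {
    "metadata": {
        "name": "nginx-deployment-7f9cb89d6b",
        "ownerReferences": [{
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "nginx-deployment",
            "uid": "owner-uid-123",
            "controller": True,
            "blockOwnerDeletion": True,
        }],
    },
}
```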
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:05:02.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 16 15:05:18.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 15:05:18.449: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 15:05:20.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 15:05:20.460: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 15:05:22.450: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 15:05:22.460: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 15:05:24.450: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 15:05:24.465: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 15:05:26.450: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 15:05:26.458: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:05:26.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5070" for this suite.
Dec 16 15:05:48.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:05:48.633: INFO: namespace container-lifecycle-hook-5070 deletion completed in 22.166522003s

• [SLOW TEST:46.500 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
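Annotation: the lifecycle-hook spec first creates a handler pod ("create the container to handle the HTTPGet hook request"), then a pod whose container declares a `postStart` httpGet hook against it. An illustrative reconstruction of that second pod; the image and hook path are assumptions, only the pod name comes from the log.

```python
def pod_with_poststart_http_hook(handler_ip, handler_port=8080):
    """Sketch of the pod the lifecycle-hook test creates: the container's
    postStart hook issues an HTTP GET against the pre-created handler pod
    before the container is considered started."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-with-poststart-http-hook"},
        "spec": {
            "containers": [{
                "name": "pod-with-poststart-http-hook",
                "image": "k8s.gcr.io/pause:3.1",  # assumed image
                "lifecycle": {
                    "postStart": {
                        "httpGet": {  # hook target: the handler pod
                            "host": handler_ip,
                            "port": handler_port,
                            "path": "/echo?msg=poststart",  # assumed path
                        },
                    },
                },
            }],
        },
    }
```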
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:05:48.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 16 15:05:48.742: INFO: Waiting up to 5m0s for pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d" in namespace "emptydir-7025" to be "success or failure"
Dec 16 15:05:48.769: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.026696ms
Dec 16 15:05:50.789: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047112053s
Dec 16 15:05:52.801: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05925849s
Dec 16 15:05:54.810: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068029455s
Dec 16 15:05:56.831: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089327533s
Dec 16 15:05:58.840: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098697392s
STEP: Saw pod success
Dec 16 15:05:58.841: INFO: Pod "pod-d937657c-8282-4830-aa0e-63a098e16c6d" satisfied condition "success or failure"
Dec 16 15:05:58.844: INFO: Trying to get logs from node iruya-node pod pod-d937657c-8282-4830-aa0e-63a098e16c6d container test-container: 
STEP: delete the pod
Dec 16 15:05:58.903: INFO: Waiting for pod pod-d937657c-8282-4830-aa0e-63a098e16c6d to disappear
Dec 16 15:05:58.952: INFO: Pod pod-d937657c-8282-4830-aa0e-63a098e16c6d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:05:58.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7025" for this suite.
Dec 16 15:06:04.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:06:05.099: INFO: namespace emptydir-7025 deletion completed in 6.139245114s

• [SLOW TEST:16.465 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
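Annotation: the EmptyDir `(root,0777,default)` spec creates a pod with an `emptyDir` volume on the default (node-disk) medium and a mounttest container that writes a file and verifies it gets mode 0777. A sketch of that pod under stated assumptions: the image and args are illustrative; only the container name `test-container` appears in the log.

```python
def emptydir_perm_test_pod():
    """Illustrative pod for the emptyDir 0777/default-medium test: mounts
    an emptyDir volume and checks file permissions inside it."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-perms"},  # hypothetical name
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                "args": ["--new_file_0777=/test-volume/test-file",
                         "--file_perm=/test-volume/test-file"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            # medium "" selects the default (node filesystem) medium.
            "volumes": [{"name": "test-volume", "emptyDir": {"medium": ""}}],
        },
    }
```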
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:06:05.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-15d84553-2a9f-4c81-94be-c53c11a919f1
STEP: Creating a pod to test consume secrets
Dec 16 15:06:05.258: INFO: Waiting up to 5m0s for pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c" in namespace "secrets-9442" to be "success or failure"
Dec 16 15:06:05.264: INFO: Pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079811ms
Dec 16 15:06:07.687: INFO: Pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429184304s
Dec 16 15:06:09.696: INFO: Pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437802823s
Dec 16 15:06:11.710: INFO: Pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451961113s
Dec 16 15:06:13.718: INFO: Pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.460334122s
STEP: Saw pod success
Dec 16 15:06:13.718: INFO: Pod "pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c" satisfied condition "success or failure"
Dec 16 15:06:13.723: INFO: Trying to get logs from node iruya-node pod pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c container secret-volume-test: 
STEP: delete the pod
Dec 16 15:06:13.788: INFO: Waiting for pod pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c to disappear
Dec 16 15:06:13.802: INFO: Pod pod-secrets-83c92b1b-af83-4956-a1d6-7595d379526c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:06:13.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9442" for this suite.
Dec 16 15:06:19.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:06:19.983: INFO: namespace secrets-9442 deletion completed in 6.173704779s

• [SLOW TEST:14.883 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
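Annotation: the Secrets volume spec creates a Secret, then a pod that mounts it as a volume and reads the file contents back. A hedged reconstruction; the container name `secret-volume-test` is from the log, the mount path and image are assumptions.

```python
def secret_volume_pod(secret_name):
    """Illustrative pod for the Secrets-in-volume test: mounts the named
    Secret and verifies its keys surface as readable files."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-secrets"},  # hypothetical name
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "secret-volume-test",
                "image": "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
                "args": ["--file_content=/etc/secret-volume/data-1"],
                "volumeMounts": [{"name": "secret-volume",
                                  "mountPath": "/etc/secret-volume"}],
            }],
            "volumes": [{"name": "secret-volume",
                         "secret": {"secretName": secret_name}}],
        },
    }
```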
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:06:19.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-40649c99-3b03-43e4-bc08-a7ae0f94c835
STEP: Creating a pod to test consume configMaps
Dec 16 15:06:20.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907" in namespace "projected-3758" to be "success or failure"
Dec 16 15:06:20.274: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907": Phase="Pending", Reason="", readiness=false. Elapsed: 10.733291ms
Dec 16 15:06:22.280: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016496059s
Dec 16 15:06:24.284: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020710067s
Dec 16 15:06:26.293: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029360684s
Dec 16 15:06:28.319: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055191306s
Dec 16 15:06:30.342: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078715468s
STEP: Saw pod success
Dec 16 15:06:30.343: INFO: Pod "pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907" satisfied condition "success or failure"
Dec 16 15:06:30.368: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 16 15:06:30.459: INFO: Waiting for pod pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907 to disappear
Dec 16 15:06:30.523: INFO: Pod pod-projected-configmaps-1ca12138-1d20-4159-ab7c-6a14c327d907 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:06:30.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3758" for this suite.
Dec 16 15:06:36.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:06:36.670: INFO: namespace projected-3758 deletion completed in 6.136126648s

• [SLOW TEST:16.686 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
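Annotation: the projected-configMap `defaultMode` spec mounts a ConfigMap through a `projected` volume and asserts the files land with the requested mode. A minimal sketch of just the volume stanza, assuming a 0400 default mode (the actual mode used is not in the log excerpt).

```python
def projected_configmap_volume(configmap_name, default_mode=0o400):
    """Sketch of a projected volume carrying a ConfigMap source with an
    explicit defaultMode, as exercised by the test above."""
    return {
        "name": "projected-configmap-volume",
        "projected": {
            # defaultMode applies to all projected files unless overridden
            # per item; the API expects the decimal value of the octal mode.
            "defaultMode": default_mode,
            "sources": [{"configMap": {"name": configmap_name}}],
        },
    }
```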
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:06:36.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 15:06:36.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c" in namespace "projected-5212" to be "success or failure"
Dec 16 15:06:36.830: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.545935ms
Dec 16 15:06:38.844: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018878382s
Dec 16 15:06:40.851: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026115933s
Dec 16 15:06:42.877: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052006228s
Dec 16 15:06:44.896: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071229709s
Dec 16 15:06:46.936: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110781247s
STEP: Saw pod success
Dec 16 15:06:46.936: INFO: Pod "downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c" satisfied condition "success or failure"
Dec 16 15:06:47.002: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c container client-container: 
STEP: delete the pod
Dec 16 15:06:47.079: INFO: Waiting for pod downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c to disappear
Dec 16 15:06:47.177: INFO: Pod downwardapi-volume-188bef25-b052-47d5-9226-d5961614101c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:06:47.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5212" for this suite.
Dec 16 15:06:53.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:06:53.340: INFO: namespace projected-5212 deletion completed in 6.15460724s

• [SLOW TEST:16.670 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
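Annotation: the downward-API CPU-limit spec (and the related memory/allocatable variants below) all share one mechanism: a downward API volume item with a `resourceFieldRef` that writes the container's resource value into a file, which the `client-container` (named in the log) then cats. A sketch of that volume item; the file path and divisor are assumptions.

```python
def downward_api_resource_item(resource, container_name="client-container",
                               divisor="1m"):
    """Sketch of one downward API volume item exposing a container
    resource field (e.g. 'limits.cpu' or 'requests.memory') as a file."""
    return {
        "path": resource.replace(".", "_"),  # e.g. limits_cpu
        "resourceFieldRef": {
            "containerName": container_name,
            "resource": resource,
            "divisor": divisor,  # scale of the reported value
        },
    }

volume = {
    "name": "podinfo",  # hypothetical volume name
    "downwardAPI": {"items": [downward_api_resource_item("limits.cpu")]},
}
```

When the limit is not set, the kubelet reports the node's allocatable value instead, which is what the "node allocatable ... as default ... limit" specs further down verify.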
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:06:53.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 15:06:53.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde" in namespace "downward-api-9488" to be "success or failure"
Dec 16 15:06:53.514: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde": Phase="Pending", Reason="", readiness=false. Elapsed: 18.469562ms
Dec 16 15:06:55.523: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027210061s
Dec 16 15:06:57.532: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03623534s
Dec 16 15:06:59.541: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045553031s
Dec 16 15:07:01.557: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde": Phase="Running", Reason="", readiness=true. Elapsed: 8.061001048s
Dec 16 15:07:03.564: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068473909s
STEP: Saw pod success
Dec 16 15:07:03.564: INFO: Pod "downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde" satisfied condition "success or failure"
Dec 16 15:07:03.569: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde container client-container: 
STEP: delete the pod
Dec 16 15:07:03.674: INFO: Waiting for pod downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde to disappear
Dec 16 15:07:03.686: INFO: Pod downwardapi-volume-94227407-8297-4579-83e4-e4b9c4fb2dde no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:07:03.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9488" for this suite.
Dec 16 15:07:09.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:07:09.948: INFO: namespace downward-api-9488 deletion completed in 6.16253858s

• [SLOW TEST:16.608 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:07:09.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 16 15:07:20.651: INFO: Successfully updated pod "pod-update-74e43336-b29e-463d-baae-296aa1f819a7"
STEP: verifying the updated pod is in kubernetes
Dec 16 15:07:20.750: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:07:20.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5643" for this suite.
Dec 16 15:07:44.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:07:44.947: INFO: namespace pods-5643 deletion completed in 24.189339391s

• [SLOW TEST:34.999 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
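Annotation: the Pods "should be updated" spec submits a pod, mutates it (the e2e test updates the pod's labels), and re-reads it to confirm the change took ("Pod update OK"). A minimal sketch of that label update as a pure function on the manifest; the label key/value are hypothetical.

```python
import copy

def update_pod_labels(pod, new_labels):
    """Return a copy of the pod manifest with labels merged in, leaving
    the original untouched — mirroring a read-modify-write update."""
    updated = copy.deepcopy(pod)
    updated.setdefault("metadata", {}).setdefault("labels", {}).update(new_labels)
    return updated

original = {"metadata": {"name": "pod-update-74e43336-b29e-463d-baae-296aa1f819a7",
                         "labels": {"time": "1"}}}
updated = update_pod_labels(original, {"time": "2"})  # hypothetical label
```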
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:07:44.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 15:07:45.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e" in namespace "downward-api-5562" to be "success or failure"
Dec 16 15:07:45.103: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.459461ms
Dec 16 15:07:47.120: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048233089s
Dec 16 15:07:49.125: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05350751s
Dec 16 15:07:51.136: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0645897s
Dec 16 15:07:53.168: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096473205s
Dec 16 15:07:55.178: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106732336s
STEP: Saw pod success
Dec 16 15:07:55.179: INFO: Pod "downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e" satisfied condition "success or failure"
Dec 16 15:07:55.183: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e container client-container: 
STEP: delete the pod
Dec 16 15:07:55.331: INFO: Waiting for pod downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e to disappear
Dec 16 15:07:55.339: INFO: Pod downwardapi-volume-ade85991-5307-447d-939d-2da1d0d87c3e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:07:55.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5562" for this suite.
Dec 16 15:08:01.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:08:01.490: INFO: namespace downward-api-5562 deletion completed in 6.143296572s

• [SLOW TEST:16.542 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:08:01.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 16 15:08:01.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3" in namespace "projected-1949" to be "success or failure"
Dec 16 15:08:01.685: INFO: Pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3": Phase="Pending", Reason="", readiness=false. Elapsed: 27.936007ms
Dec 16 15:08:03.699: INFO: Pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041703255s
Dec 16 15:08:05.708: INFO: Pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050469623s
Dec 16 15:08:07.718: INFO: Pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060517789s
Dec 16 15:08:09.728: INFO: Pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071037958s
STEP: Saw pod success
Dec 16 15:08:09.729: INFO: Pod "downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3" satisfied condition "success or failure"
Dec 16 15:08:09.734: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3 container client-container: 
STEP: delete the pod
Dec 16 15:08:09.943: INFO: Waiting for pod downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3 to disappear
Dec 16 15:08:10.002: INFO: Pod downwardapi-volume-07216983-df2c-4f12-a8d7-48d4cfea14a3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:08:10.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1949" for this suite.
Dec 16 15:08:16.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:08:16.141: INFO: namespace projected-1949 deletion completed in 6.12966467s

• [SLOW TEST:14.648 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
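For readers following the log: the spec above verifies that a projected downward API volume reports the node's allocatable memory when the container sets no memory limit. The sketch below shows roughly the shape of pod manifest such a test submits, built as a plain Python dict. All names, the image, and the mount path are illustrative assumptions, not the suite's actual values.

```python
# Illustrative sketch (not the e2e suite's exact manifest): a pod with a
# projected downward API volume exposing the container's memory limit.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},  # assumed name
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # assumed image
            # Print the projected file, then exit so the pod reaches Succeeded.
            "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
            # Deliberately no resources.limits.memory here: with no limit set,
            # the downward API falls back to the node's allocatable memory.
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "restartPolicy": "Never",
        "volumes": [{
            "name": "podinfo",
            "projected": {
                "sources": [{
                    "downwardAPI": {
                        "items": [{
                            "path": "memory_limit",
                            "resourceFieldRef": {
                                "containerName": "client-container",
                                "resource": "limits.memory",
                            },
                        }],
                    },
                }],
            },
        }],
    },
}

container = pod_manifest["spec"]["containers"][0]
print("memory limit set:", "resources" in container)  # → memory limit set: False
```

The framework's "success or failure" wait seen in the log corresponds to polling such a pod until its phase reaches Succeeded (the short command exits) or Failed.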
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 16 15:08:16.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 16 15:08:16.291: INFO: Waiting up to 5m0s for pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32" in namespace "containers-1452" to be "success or failure"
Dec 16 15:08:16.358: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32": Phase="Pending", Reason="", readiness=false. Elapsed: 66.009507ms
Dec 16 15:08:18.365: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073633775s
Dec 16 15:08:20.379: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086949162s
Dec 16 15:08:22.386: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09449284s
Dec 16 15:08:24.408: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116201011s
Dec 16 15:08:26.415: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122969368s
STEP: Saw pod success
Dec 16 15:08:26.415: INFO: Pod "client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32" satisfied condition "success or failure"
Dec 16 15:08:26.420: INFO: Trying to get logs from node iruya-node pod client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32 container test-container: 
STEP: delete the pod
Dec 16 15:08:26.616: INFO: Waiting for pod client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32 to disappear
Dec 16 15:08:26.622: INFO: Pod client-containers-73e52879-e842-4f57-b553-c3b30ffb2f32 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 16 15:08:26.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1452" for this suite.
Dec 16 15:08:32.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 15:08:32.766: INFO: namespace containers-1452 deletion completed in 6.137876435s

• [SLOW TEST:16.625 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
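The spec above checks that `spec.containers[].command` overrides the image's default command (its Docker ENTRYPOINT). The documented Kubernetes rules can be sketched as a small helper; this is an illustration of the field semantics, not code from the suite:

```python
def effective_command(image_entrypoint, image_cmd, command=None, args=None):
    """Compute what a container runs, per the documented Kubernetes rules:
    `command` overrides the image ENTRYPOINT and `args` overrides the image
    CMD; if `command` is set without `args`, the image CMD is ignored too."""
    if command is not None:
        # command replaces the entrypoint; image CMD no longer applies.
        return command + (args if args is not None else [])
    # No command: keep the image entrypoint, with args (if any) replacing CMD.
    return image_entrypoint + (args if args is not None else image_cmd)

# Neither set: image defaults run as-is.
print(effective_command(["/entry"], ["default-arg"]))
# → ['/entry', 'default-arg']

# command set (as in the test above): the image entrypoint and CMD are ignored.
print(effective_command(["/entry"], ["default-arg"], command=["/bin/echo", "hi"]))
# → ['/bin/echo', 'hi']
```

The e2e test then reads the container's output (the "Trying to get logs" step in the log) to confirm the overridden command actually ran.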
S
Dec 16 15:08:32.766: INFO: Running AfterSuite actions on all nodes
Dec 16 15:08:32.766: INFO: Running AfterSuite actions on node 1
Dec 16 15:08:32.766: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7931.259 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS