I1215 12:56:09.649322 8 e2e.go:243] Starting e2e run "d05d2190-cb43-4fed-bf15-7846e176820e" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1576414567 - Will randomize all specs Will run 215 of 4412 specs Dec 15 12:56:10.286: INFO: >>> kubeConfig: /root/.kube/config Dec 15 12:56:10.290: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Dec 15 12:56:10.342: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Dec 15 12:56:10.375: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Dec 15 12:56:10.375: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Dec 15 12:56:10.375: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Dec 15 12:56:10.385: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Dec 15 12:56:10.385: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Dec 15 12:56:10.385: INFO: e2e test version: v1.15.7 Dec 15 12:56:10.386: INFO: kube-apiserver version: v1.15.1 SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:56:10.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir Dec 15 12:56:10.547: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 15 12:56:10.559: INFO: Waiting up to 5m0s for pod "pod-76348763-3e89-4904-a845-b08ee75082c0" in namespace "emptydir-7110" to be "success or failure" Dec 15 12:56:10.580: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.181336ms Dec 15 12:56:12.606: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047140591s Dec 15 12:56:14.635: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075814145s Dec 15 12:56:16.645: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085963889s Dec 15 12:56:18.657: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097687368s Dec 15 12:56:20.788: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229479335s Dec 15 12:56:22.800: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.241377902s STEP: Saw pod success Dec 15 12:56:22.800: INFO: Pod "pod-76348763-3e89-4904-a845-b08ee75082c0" satisfied condition "success or failure" Dec 15 12:56:22.806: INFO: Trying to get logs from node iruya-node pod pod-76348763-3e89-4904-a845-b08ee75082c0 container test-container: STEP: delete the pod Dec 15 12:56:23.013: INFO: Waiting for pod pod-76348763-3e89-4904-a845-b08ee75082c0 to disappear Dec 15 12:56:23.043: INFO: Pod pod-76348763-3e89-4904-a845-b08ee75082c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:56:23.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7110" for this suite. Dec 15 12:56:29.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:56:29.367: INFO: namespace emptydir-7110 deletion completed in 6.319008678s • [SLOW TEST:18.980 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:56:29.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Dec 15 12:56:41.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b9d60d01-9553-4d67-8dc2-0447871a9753 -c busybox-main-container --namespace=emptydir-7645 -- cat /usr/share/volumeshare/shareddata.txt' Dec 15 12:56:44.413: INFO: stderr: "" Dec 15 12:56:44.413: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:56:44.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7645" for this suite. 
Dec 15 12:56:50.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:56:50.698: INFO: namespace emptydir-7645 deletion completed in 6.2656255s • [SLOW TEST:21.331 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:56:50.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Dec 15 12:56:51.505: INFO: created pod pod-service-account-defaultsa Dec 15 12:56:51.505: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 15 12:56:51.524: INFO: created pod pod-service-account-mountsa Dec 15 12:56:51.524: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 15 12:56:51.589: INFO: created pod pod-service-account-nomountsa Dec 15 12:56:51.590: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 15 12:56:51.631: INFO: created pod pod-service-account-defaultsa-mountspec Dec 15 12:56:51.631: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 15 12:56:51.679: INFO: created pod pod-service-account-mountsa-mountspec Dec 15 12:56:51.679: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 15 12:56:51.701: INFO: created pod pod-service-account-nomountsa-mountspec Dec 15 12:56:51.701: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 15 12:56:51.772: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 15 12:56:51.773: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 15 12:56:53.042: INFO: created pod pod-service-account-mountsa-nomountspec Dec 15 12:56:53.042: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 15 12:56:53.082: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 15 12:56:53.082: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:56:53.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9097" for this suite. 
Dec 15 12:57:27.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:57:27.364: INFO: namespace svcaccounts-9097 deletion completed in 33.831861395s • [SLOW TEST:36.665 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:57:27.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 12:57:59.633: INFO: Container started at 2019-12-15 12:57:35 +0000 UTC, pod became ready at 2019-12-15 12:57:59 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:57:59.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-344" for this suite. 
Dec 15 12:58:21.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:58:21.747: INFO: namespace container-probe-344 deletion completed in 22.109675436s • [SLOW TEST:54.382 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:58:21.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:58:33.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8137" for this suite. 
Dec 15 12:58:40.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:58:40.164: INFO: namespace kubelet-test-8137 deletion completed in 6.168938189s • [SLOW TEST:18.416 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:58:40.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-9695/configmap-test-b0d20b76-0681-4f7e-8923-1d17a38ea4e9 STEP: Creating a pod to test consume configMaps Dec 15 12:58:40.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc" in namespace "configmap-9695" to be "success or failure" Dec 15 12:58:40.353: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.553718ms Dec 15 12:58:42.607: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28041627s Dec 15 12:58:44.619: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292928704s Dec 15 12:58:46.632: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305456967s Dec 15 12:58:48.639: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312948444s Dec 15 12:58:50.652: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326083138s Dec 15 12:58:52.672: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.345734388s STEP: Saw pod success Dec 15 12:58:52.672: INFO: Pod "pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc" satisfied condition "success or failure" Dec 15 12:58:52.680: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc container env-test: STEP: delete the pod Dec 15 12:58:52.810: INFO: Waiting for pod pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc to disappear Dec 15 12:58:52.901: INFO: Pod pod-configmaps-7ddbde23-246d-47cc-97b5-67bbce703acc no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:58:52.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9695" for this suite. Dec 15 12:58:58.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:58:59.133: INFO: namespace configmap-9695 deletion completed in 6.214090623s • [SLOW TEST:18.968 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:58:59.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 15 12:58:59.324: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36" in namespace "downward-api-796" to be "success or failure" Dec 15 12:58:59.329: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.944581ms Dec 15 12:59:01.342: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01771512s Dec 15 12:59:03.354: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030419856s Dec 15 12:59:05.370: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046102431s Dec 15 12:59:07.377: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053213051s Dec 15 12:59:09.410: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.08589249s Dec 15 12:59:11.419: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.094677554s STEP: Saw pod success Dec 15 12:59:11.419: INFO: Pod "downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36" satisfied condition "success or failure" Dec 15 12:59:11.424: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36 container client-container: STEP: delete the pod Dec 15 12:59:11.468: INFO: Waiting for pod downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36 to disappear Dec 15 12:59:11.487: INFO: Pod downwardapi-volume-2fc2f50a-a765-4111-8a1a-0673ef3aba36 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:59:11.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-796" for this suite. Dec 15 12:59:17.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:59:17.710: INFO: namespace downward-api-796 deletion completed in 6.217747723s • [SLOW TEST:18.576 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:59:17.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 12:59:28.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3796" for this suite. 
Dec 15 12:59:34.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 12:59:34.334: INFO: namespace emptydir-wrapper-3796 deletion completed in 6.263312581s • [SLOW TEST:16.622 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 12:59:34.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-255d63b2-49f5-4e99-8acd-80046ee87136 STEP: Creating configMap with name cm-test-opt-upd-595a80f3-3c96-4375-aac0-68a41df48a08 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-255d63b2-49f5-4e99-8acd-80046ee87136 STEP: Updating configmap cm-test-opt-upd-595a80f3-3c96-4375-aac0-68a41df48a08 STEP: Creating configMap with name cm-test-opt-create-430a157d-2868-44b4-a7dc-31157dc324ae STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:01:11.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8899" for this suite. 
Dec 15 13:01:33.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:01:33.545: INFO: namespace projected-8899 deletion completed in 22.174784322s • [SLOW TEST:119.209 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:01:33.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 15 13:01:33.735: INFO: Waiting up to 5m0s for pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401" in namespace "emptydir-1591" to be "success or failure" Dec 15 13:01:33.825: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Pending", Reason="", readiness=false. Elapsed: 89.846498ms Dec 15 13:01:35.835: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098983599s Dec 15 13:01:37.848: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11199738s Dec 15 13:01:39.861: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125594765s Dec 15 13:01:41.877: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14091923s Dec 15 13:01:43.890: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Pending", Reason="", readiness=false. Elapsed: 10.154826663s Dec 15 13:01:45.905: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.16901028s STEP: Saw pod success Dec 15 13:01:45.905: INFO: Pod "pod-e391ad77-b704-49c4-9b0a-99d8c6c22401" satisfied condition "success or failure" Dec 15 13:01:45.910: INFO: Trying to get logs from node iruya-node pod pod-e391ad77-b704-49c4-9b0a-99d8c6c22401 container test-container: STEP: delete the pod Dec 15 13:01:46.010: INFO: Waiting for pod pod-e391ad77-b704-49c4-9b0a-99d8c6c22401 to disappear Dec 15 13:01:46.073: INFO: Pod pod-e391ad77-b704-49c4-9b0a-99d8c6c22401 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:01:46.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1591" for this suite. 
Dec 15 13:01:52.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:01:52.227: INFO: namespace emptydir-1591 deletion completed in 6.142377361s • [SLOW TEST:18.682 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:01:52.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 15 13:01:52.397: INFO: Waiting up to 5m0s for pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3" in namespace "downward-api-4965" to be "success or failure" Dec 15 13:01:52.407: INFO: Pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.485964ms Dec 15 13:01:54.421: INFO: Pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023865603s Dec 15 13:01:56.433: INFO: Pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03561972s Dec 15 13:01:58.457: INFO: Pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059973807s Dec 15 13:02:00.473: INFO: Pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076123094s STEP: Saw pod success Dec 15 13:02:00.474: INFO: Pod "downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3" satisfied condition "success or failure" Dec 15 13:02:00.480: INFO: Trying to get logs from node iruya-node pod downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3 container dapi-container: STEP: delete the pod Dec 15 13:02:00.689: INFO: Waiting for pod downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3 to disappear Dec 15 13:02:00.696: INFO: Pod downward-api-293ed910-4136-41fd-9355-f3f0c695d0f3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:02:00.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4965" for this suite. 
Dec 15 13:02:06.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:02:07.035: INFO: namespace downward-api-4965 deletion completed in 6.333992158s • [SLOW TEST:14.808 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:02:07.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 13:02:07.173: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 15 13:02:07.202: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 15 13:02:12.257: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 15 13:02:20.280: INFO: Creating deployment "test-rolling-update-deployment" Dec 15 13:02:20.297: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 15 13:02:20.307: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 15 13:02:22.319: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 15 13:02:22.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 13:02:24.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 13:02:26.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 13:02:28.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712011740, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 15 13:02:30.339: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 15 13:02:30.370: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1188,SelfLink:/apis/apps/v1/namespaces/deployment-1188/deployments/test-rolling-update-deployment,UID:5215dc67-45f5-4d27-9683-48229af032ea,ResourceVersion:16758534,Generation:1,CreationTimestamp:2019-12-15 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-15 13:02:20 +0000 UTC 2019-12-15 13:02:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-15 13:02:29 +0000 UTC 2019-12-15 13:02:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 15 13:02:30.384: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1188,SelfLink:/apis/apps/v1/namespaces/deployment-1188/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:57101736-e47d-4f84-b5d1-899baff5c512,ResourceVersion:16758523,Generation:1,CreationTimestamp:2019-12-15 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5215dc67-45f5-4d27-9683-48229af032ea 0xc002d7d277 
0xc002d7d278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 15 13:02:30.385: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 15 13:02:30.385: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1188,SelfLink:/apis/apps/v1/namespaces/deployment-1188/replicasets/test-rolling-update-controller,UID:fe7b49c8-aa25-479d-8a11-3ccba1ee467e,ResourceVersion:16758533,Generation:2,CreationTimestamp:2019-12-15 13:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5215dc67-45f5-4d27-9683-48229af032ea 0xc002d7d18f 0xc002d7d1a0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 15 13:02:30.399: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-78vdr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-78vdr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1188,SelfLink:/api/v1/namespaces/deployment-1188/pods/test-rolling-update-deployment-79f6b9d75c-78vdr,UID:45238db1-1256-4492-b56e-6657c9d964bf,ResourceVersion:16758522,Generation:0,CreationTimestamp:2019-12-15 13:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 57101736-e47d-4f84-b5d1-899baff5c512 0xc001b20b97 0xc001b20b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9pfjb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9pfjb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9pfjb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b20c20} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001b20c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:02:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:02:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:02:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:02:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-15 13:02:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-15 13:02:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ff77539fa6223ddca667be0578af3308fbbcb2fb9eacfd37febab5ac83f81742}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:02:30.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1188" for this suite. Dec 15 13:02:38.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:02:38.560: INFO: namespace deployment-1188 deletion completed in 8.153827598s • [SLOW TEST:31.525 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:02:38.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-fc0ae9b2-1245-4ce7-ae12-56e7a8f1992a STEP: Creating a pod to test consume secrets Dec 15 13:02:38.799: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8" in namespace "projected-8378" to be "success or failure" Dec 15 13:02:38.968: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Pending", Reason="", readiness=false. Elapsed: 168.378413ms Dec 15 13:02:40.983: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.183570837s Dec 15 13:02:42.993: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193542666s Dec 15 13:02:45.020: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221232341s Dec 15 13:02:47.029: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229985908s Dec 15 13:02:49.045: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.246058207s Dec 15 13:02:51.055: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.255901025s STEP: Saw pod success Dec 15 13:02:51.055: INFO: Pod "pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8" satisfied condition "success or failure" Dec 15 13:02:51.058: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8 container projected-secret-volume-test: STEP: delete the pod Dec 15 13:02:51.114: INFO: Waiting for pod pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8 to disappear Dec 15 13:02:51.215: INFO: Pod pod-projected-secrets-8a80cf24-7547-4437-a7d2-2881e48febc8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:02:51.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8378" for this suite. Dec 15 13:02:57.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:02:57.403: INFO: namespace projected-8378 deletion completed in 6.183441246s • [SLOW TEST:18.839 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:02:57.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-443ae340-d065-4ef6-abd4-c565213e87c1 in namespace container-probe-2395 Dec 15 13:03:07.555: INFO: Started pod test-webserver-443ae340-d065-4ef6-abd4-c565213e87c1 in namespace container-probe-2395 STEP: checking the pod's current state and verifying that restartCount is present Dec 15 13:03:07.559: 
INFO: Initial restart count of pod test-webserver-443ae340-d065-4ef6-abd4-c565213e87c1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:07:09.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2395" for this suite. Dec 15 13:07:15.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:07:15.937: INFO: namespace container-probe-2395 deletion completed in 6.337543992s • [SLOW TEST:258.533 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:07:15.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:07:26.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6814" for this suite. 
Dec 15 13:08:10.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:08:10.451: INFO: namespace kubelet-test-6814 deletion completed in 44.322511692s • [SLOW TEST:54.513 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:08:10.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 15 13:08:10.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3725,SelfLink:/api/v1/namespaces/watch-3725/configmaps/e2e-watch-test-label-changed,UID:ff587ab9-d4ab-43a7-a20d-eae7d04cc7e5,ResourceVersion:16759089,Generation:0,CreationTimestamp:2019-12-15 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 15 13:08:10.639: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3725,SelfLink:/api/v1/namespaces/watch-3725/configmaps/e2e-watch-test-label-changed,UID:ff587ab9-d4ab-43a7-a20d-eae7d04cc7e5,ResourceVersion:16759090,Generation:0,CreationTimestamp:2019-12-15 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 15 13:08:10.639: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3725,SelfLink:/api/v1/namespaces/watch-3725/configmaps/e2e-watch-test-label-changed,UID:ff587ab9-d4ab-43a7-a20d-eae7d04cc7e5,ResourceVersion:16759091,Generation:0,CreationTimestamp:2019-12-15 13:08:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 15 13:08:20.889: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3725,SelfLink:/api/v1/namespaces/watch-3725/configmaps/e2e-watch-test-label-changed,UID:ff587ab9-d4ab-43a7-a20d-eae7d04cc7e5,ResourceVersion:16759107,Generation:0,CreationTimestamp:2019-12-15 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 15 13:08:20.890: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3725,SelfLink:/api/v1/namespaces/watch-3725/configmaps/e2e-watch-test-label-changed,UID:ff587ab9-d4ab-43a7-a20d-eae7d04cc7e5,ResourceVersion:16759108,Generation:0,CreationTimestamp:2019-12-15 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Dec 15 13:08:20.890: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3725,SelfLink:/api/v1/namespaces/watch-3725/configmaps/e2e-watch-test-label-changed,UID:ff587ab9-d4ab-43a7-a20d-eae7d04cc7e5,ResourceVersion:16759109,Generation:0,CreationTimestamp:2019-12-15 13:08:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:08:20.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3725" for this suite. 
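Note: the Watchers test above depends on label-selector filtering of watch streams: a watch opened with the selector watch-this-configmap=label-changed-and-restored delivers ADDED/MODIFIED/DELETED events only while the object's label matches, so changing the label away suppresses notifications and restoring it yields a fresh ADDED for the same UID, which is exactly the second ADDED visible in the log. The same stream can be observed by hand with kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch against an object shaped like this sketch (name, namespace and data taken from the log):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: e2e-watch-test-label-changed
    namespace: watch-3725
    labels:
      watch-this-configmap: label-changed-and-restored   # flipping this label starts/stops event delivery
  data:
    mutation: "1"                                        # the test bumps this value on each modification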
Dec 15 13:08:26.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:08:27.120: INFO: namespace watch-3725 deletion completed in 6.220960635s • [SLOW TEST:16.669 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:08:27.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 13:08:27.318: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 15 13:08:32.325: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 15 13:08:36.340: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 15 13:08:48.399: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1129,SelfLink:/apis/apps/v1/namespaces/deployment-1129/deployments/test-cleanup-deployment,UID:fee359ad-578a-4cd3-a732-33543a46e497,ResourceVersion:16759190,Generation:1,CreationTimestamp:2019-12-15 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-15 13:08:36 +0000 UTC 2019-12-15 13:08:36 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-15 13:08:46 +0000 UTC 2019-12-15 13:08:36 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 15 13:08:48.402: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-1129,SelfLink:/apis/apps/v1/namespaces/deployment-1129/replicasets/test-cleanup-deployment-55bbcbc84c,UID:985491ba-92b2-4c74-a967-0a7802fefd69,ResourceVersion:16759180,Generation:1,CreationTimestamp:2019-12-15 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fee359ad-578a-4cd3-a732-33543a46e497 0xc002959007 0xc002959008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 15 13:08:48.406: INFO: Pod "test-cleanup-deployment-55bbcbc84c-gfsqj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-gfsqj,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-1129,SelfLink:/api/v1/namespaces/deployment-1129/pods/test-cleanup-deployment-55bbcbc84c-gfsqj,UID:7c124c66-b9cb-428f-a841-3bc283e6873d,ResourceVersion:16759179,Generation:0,CreationTimestamp:2019-12-15 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 985491ba-92b2-4c74-a967-0a7802fefd69 0xc002d9ba77 0xc002d9ba78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vn6gj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vn6gj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-vn6gj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9bb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9bb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:08:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:08:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 
UTC 2019-12-15 13:08:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 13:08:36 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-15 13:08:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-15 13:08:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2a6befcd6f6e46eddbcdd12f7bfd61d4731e0b73a47a35d52d6b8099770d73c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:08:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1129" for this suite. Dec 15 13:08:54.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:08:54.558: INFO: namespace deployment-1129 deletion completed in 6.147166746s • [SLOW TEST:27.437 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:08:54.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-44da170d-5306-4b4c-8462-62d84110e198 in namespace container-probe-7000 Dec 15 13:09:04.833: INFO: Started pod liveness-44da170d-5306-4b4c-8462-62d84110e198 in namespace container-probe-7000 STEP: checking the pod's current state and verifying that restartCount is present Dec 15 13:09:04.838: INFO: Initial restart count of pod liveness-44da170d-5306-4b4c-8462-62d84110e198 is 0 Dec 15 13:09:27.025: INFO: Restart count of pod container-probe-7000/liveness-44da170d-5306-4b4c-8462-62d84110e198 is now 1 (22.187798142s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:09:27.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7000" for this suite. 
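Note: the restart observed above (restartCount going from 0 to 1 after ~22s) is the kubelet acting on a failing HTTP liveness probe. A minimal sketch of such a pod follows; the image here is the standard example from the Kubernetes docs, which serves /healthz successfully and then starts returning errors, standing in for whatever the e2e liveness image does:

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http-demo        # illustrative; the test generates a random name
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness    # docs example image: /healthz succeeds, then fails after ~10s
      args: ["/server"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3            # with these settings the first restart lands well under a minute

Once probe failures exceed the failure threshold the kubelet kills the container; restartPolicy (Always by default) restarts it and increments restartCount, which is exactly the counter the test polls.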
Dec 15 13:09:33.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:09:33.180: INFO: namespace container-probe-7000 deletion completed in 6.120992208s • [SLOW TEST:38.621 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:09:33.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Dec 15 13:09:33.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5424,SelfLink:/api/v1/namespaces/watch-5424/configmaps/e2e-watch-test-resource-version,UID:075fc100-afb4-40c0-af87-e0a3b95e9c87,ResourceVersion:16759315,Generation:0,CreationTimestamp:2019-12-15 13:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 15 13:09:33.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5424,SelfLink:/api/v1/namespaces/watch-5424/configmaps/e2e-watch-test-resource-version,UID:075fc100-afb4-40c0-af87-e0a3b95e9c87,ResourceVersion:16759316,Generation:0,CreationTimestamp:2019-12-15 13:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:09:33.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5424" for this suite. 
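Note: the test above exercises resuming a watch from a historical resourceVersion: it performs two updates and a delete first, then opens the watch at the version returned by the first update, and the apiserver replays everything after that point (here the MODIFIED carrying mutation: 2 and the final DELETED). At the HTTP level this is GET /api/v1/namespaces/<ns>/configmaps?watch=true&resourceVersion=<rv>, and each event on the stream is an envelope of this shape (trimmed to the fields shown in the log):

  type: MODIFIED                    # or ADDED / DELETED
  object:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: e2e-watch-test-resource-version
      namespace: watch-5424
      resourceVersion: "16759315"   # versions are strictly ordered, which is what makes resumption possible
    data:
      mutation: "2"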
Dec 15 13:09:39.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:09:39.691: INFO: namespace watch-5424 deletion completed in 6.194373606s • [SLOW TEST:6.510 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:09:39.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-eb6d65ff-7420-4b1e-ae19-5bfcbb98f5e8 STEP: Creating secret with name s-test-opt-upd-92eefdcd-061c-405c-a826-9e4e1ad566b0 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-eb6d65ff-7420-4b1e-ae19-5bfcbb98f5e8 STEP: Updating secret s-test-opt-upd-92eefdcd-061c-405c-a826-9e4e1ad566b0 STEP: Creating secret with name s-test-opt-create-81f03d51-fcc6-4ffe-8c2a-b6114458589a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:11:22.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1716" for this suite. 
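Note: the long wait in the Secrets test above ("waiting to observe update in volume") is the kubelet's periodic sync propagating Secret changes into already-mounted volumes without restarting the pod. Marking the volume optional is what lets the pod keep running while one of the referenced secrets is deleted. A minimal sketch (all names illustrative, not the generated s-test-opt-* names):

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-volume-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: creds
        mountPath: /etc/creds
    volumes:
    - name: creds
      secret:
        secretName: demo-secret     # illustrative name
        optional: true              # mount succeeds even if the secret is absent or later deleted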
Dec 15 13:11:52.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:11:52.644: INFO: namespace secrets-1716 deletion completed in 30.19000244s • [SLOW TEST:132.953 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:11:52.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Dec 15 13:11:52.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-587,SelfLink:/api/v1/namespaces/watch-587/configmaps/e2e-watch-test-watch-closed,UID:e895d1ec-06d2-4c1b-9de7-a55206c9fbc1,ResourceVersion:16759554,Generation:0,CreationTimestamp:2019-12-15 13:11:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 15 13:11:52.827: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-587,SelfLink:/api/v1/namespaces/watch-587/configmaps/e2e-watch-test-watch-closed,UID:e895d1ec-06d2-4c1b-9de7-a55206c9fbc1,ResourceVersion:16759555,Generation:0,CreationTimestamp:2019-12-15 13:11:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Dec 15 13:11:52.862: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-587,SelfLink:/api/v1/namespaces/watch-587/configmaps/e2e-watch-test-watch-closed,UID:e895d1ec-06d2-4c1b-9de7-a55206c9fbc1,ResourceVersion:16759556,Generation:0,CreationTimestamp:2019-12-15 13:11:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 15 13:11:52.864: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-587,SelfLink:/api/v1/namespaces/watch-587/configmaps/e2e-watch-test-watch-closed,UID:e895d1ec-06d2-4c1b-9de7-a55206c9fbc1,ResourceVersion:16759557,Generation:0,CreationTimestamp:2019-12-15 13:11:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:11:52.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-587" for this suite. Dec 15 13:11:59.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:11:59.130: INFO: namespace watch-587 deletion completed in 6.220398397s • [SLOW TEST:6.485 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:11:59.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-49126a07-0de8-4549-bab2-847279922203 STEP: Creating a pod to test consume secrets Dec 15 13:11:59.277: INFO: Waiting up to 5m0s for pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c" in namespace "secrets-5969" to be "success or failure" Dec 15 13:11:59.313: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.861051ms Dec 15 13:12:01.325: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048280599s Dec 15 13:12:03.336: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059150956s Dec 15 13:12:05.348: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071436909s Dec 15 13:12:07.357: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c": Phase="Running", Reason="", readiness=true. Elapsed: 8.080316731s Dec 15 13:12:09.451: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.174399336s STEP: Saw pod success Dec 15 13:12:09.451: INFO: Pod "pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c" satisfied condition "success or failure" Dec 15 13:12:09.459: INFO: Trying to get logs from node iruya-node pod pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c container secret-volume-test: STEP: delete the pod Dec 15 13:12:09.613: INFO: Waiting for pod pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c to disappear Dec 15 13:12:09.623: INFO: Pod pod-secrets-3d7c07b0-8276-4c23-b145-3d2d9dcf973c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:12:09.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5969" for this suite. Dec 15 13:12:15.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:12:15.862: INFO: namespace secrets-5969 deletion completed in 6.229254749s • [SLOW TEST:16.732 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:12:15.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-595a8f80-e466-4c2a-9b42-8f6ed3a8caae STEP: Creating a pod to test consume configMaps Dec 15 13:12:16.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9" in namespace "projected-5870" to be "success or failure" Dec 15 13:12:16.101: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.831204ms Dec 15 13:12:18.110: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063267918s Dec 15 13:12:20.118: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071819817s Dec 15 13:12:22.127: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08029666s Dec 15 13:12:24.135: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0881773s Dec 15 13:12:26.143: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096395409s STEP: Saw pod success Dec 15 13:12:26.143: INFO: Pod "pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9" satisfied condition "success or failure" Dec 15 13:12:26.149: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9 container projected-configmap-volume-test: STEP: delete the pod Dec 15 13:12:26.261: INFO: Waiting for pod pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9 to disappear Dec 15 13:12:26.270: INFO: Pod pod-projected-configmaps-38c44665-0b5e-4652-a7ad-8ad25c31d0d9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:12:26.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5870" for this suite. Dec 15 13:12:32.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:12:32.431: INFO: namespace projected-5870 deletion completed in 6.152005695s • [SLOW TEST:16.568 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:12:32.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 15 13:12:32.576: INFO: Waiting up to 5m0s for pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b" in namespace "downward-api-3312" to be "success or failure" Dec 15 13:12:32.725: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Pending", Reason="", readiness=false. Elapsed: 148.74256ms Dec 15 13:12:34.739: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.162656669s Dec 15 13:12:36.753: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176472708s Dec 15 13:12:38.773: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196539696s Dec 15 13:12:40.805: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22848953s Dec 15 13:12:42.816: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.239834042s Dec 15 13:12:44.826: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.249134206s STEP: Saw pod success Dec 15 13:12:44.826: INFO: Pod "downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b" satisfied condition "success or failure" Dec 15 13:12:44.828: INFO: Trying to get logs from node iruya-node pod downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b container dapi-container: STEP: delete the pod Dec 15 13:12:45.511: INFO: Waiting for pod downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b to disappear Dec 15 13:12:45.663: INFO: Pod downward-api-ee7c6fc5-8550-4b86-b165-ef416ed7ca8b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:12:45.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3312" for this suite. Dec 15 13:12:51.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:12:51.866: INFO: namespace downward-api-3312 deletion completed in 6.192440808s • [SLOW TEST:19.435 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:12:51.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 13:12:51.950: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:12:52.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6143" for this suite. 
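Note: the CustomResourceDefinition test above only needs to register and unregister a definition; against the v1.15 apiserver used in this run that is the apiextensions.k8s.io/v1beta1 form. A minimal sketch of such a definition (group and names illustrative):

  apiVersion: apiextensions.k8s.io/v1beta1   # the v1 form replaced this once CRDs went GA in 1.16
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com                   # must be <plural>.<group>
  spec:
    group: example.com
    version: v1
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo

Deleting the CRD cascades to all Foo objects it defined, which is why create/delete alone is a meaningful conformance check.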
Dec 15 13:12:58.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:12:58.958: INFO: namespace custom-resource-definition-6143 deletion completed in 6.318787737s • [SLOW TEST:7.092 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:12:58.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:13:55.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7024" for this suite. 
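Note: the Container Runtime test above runs the same exiting container under the three restart policies (the terminate-cmd-rpa/rpof/rpn pods map to Always, OnFailure and Never) and checks RestartCount, Phase, Ready and State after each. The semantics it verifies are plain API behavior: with restartPolicy: Never a nonzero exit leaves the pod in Phase Failed with restartCount 0, while OnFailure keeps restarting until the command exits 0. An illustrative pod for the Never case:

  apiVersion: v1
  kind: Pod
  metadata:
    name: terminate-demo              # illustrative name
  spec:
    restartPolicy: Never              # swap for Always / OnFailure to see the other cases
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "exit 1"] # nonzero exit -> Phase Failed, restartCount stays 0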
Dec 15 13:14:01.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:14:02.067: INFO: namespace container-runtime-7024 deletion completed in 6.293058991s • [SLOW TEST:63.107 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:14:02.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-c17d992a-96ab-4f59-807a-a3cdded72c65 STEP: Creating a pod to test consume configMaps Dec 15 13:14:02.331: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6" in namespace "projected-2082" to be "success or failure" Dec 15 13:14:02.360: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.469933ms Dec 15 13:14:04.371: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038707292s Dec 15 13:14:06.382: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050477922s Dec 15 13:14:08.391: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059280571s Dec 15 13:14:10.513: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180563227s Dec 15 13:14:13.142: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.810369351s Dec 15 13:14:15.163: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.830819404s STEP: Saw pod success Dec 15 13:14:15.163: INFO: Pod "pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6" satisfied condition "success or failure" Dec 15 13:14:15.173: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6 container projected-configmap-volume-test: STEP: delete the pod Dec 15 13:14:15.608: INFO: Waiting for pod pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6 to disappear Dec 15 13:14:15.619: INFO: Pod pod-projected-configmaps-ec2256fd-8464-4161-89d8-d78e41b593c6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:14:15.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2082" for this suite. Dec 15 13:14:21.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:14:21.813: INFO: namespace projected-2082 deletion completed in 6.184484602s • [SLOW TEST:19.745 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:14:21.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 13:14:22.192: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e043c4b6-dcde-4587-8054-b12ccc3955b9", Controller:(*bool)(0xc00270c0da), BlockOwnerDeletion:(*bool)(0xc00270c0db)}} Dec 15 13:14:22.298: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ff02e018-9541-425d-b0c4-1c746f177060", Controller:(*bool)(0xc00270c27a), BlockOwnerDeletion:(*bool)(0xc00270c27b)}} Dec 15 13:14:22.325: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d098b5aa-5a5f-42c6-b420-44bd559d91fc", Controller:(*bool)(0xc002d9af72), BlockOwnerDeletion:(*bool)(0xc002d9af73)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:14:27.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8756" for this suite. 
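Note: the Garbage collector test above deliberately builds a cycle of ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, matching the three INFO lines) and verifies that garbage collection is not wedged by it: the collector copes with the cycle and the pods can still be deleted. In manifest form, one link of the cycle looks like the sketch below; the uid is the one printed above, while controller and blockOwnerDeletion are set by the test but the dump only shows their pointer addresses:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: pod3
      uid: e043c4b6-dcde-4587-8054-b12ccc3955b9   # ownerReferences must carry the owner's uid, not just its name
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause          # illustrative; any long-running container works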
Dec 15 13:14:33.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:14:33.650: INFO: namespace gc-8756 deletion completed in 6.188956339s • [SLOW TEST:11.837 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:14:33.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 15 13:14:42.359: INFO: Successfully updated pod "pod-update-77d6d76d-5086-49be-a0e1-3322a39bdb58" STEP: verifying the updated pod is in kubernetes Dec 15 13:14:42.385: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:14:42.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-186" for this suite. 
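Note: the Pods update test above exercises the narrow set of in-place pod mutations the apiserver allows: essentially metadata (labels/annotations), container images, spec.activeDeadlineSeconds, and additions to spec.tolerations; this e2e case updates pod metadata and re-reads the object. The same kind of update can be applied by hand with kubectl patch pod pod-update-77d6d76d-5086-49be-a0e1-3322a39bdb58 -n pods-186 --type=merge -p '{"metadata":{"labels":{"time":"updated"}}}', the JSON equivalent of this merge-patch body (label key and value illustrative):

  metadata:
    labels:
      time: "updated"   # any metadata.labels change is an allowed in-place pod update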
Dec 15 13:15:04.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:15:04.554: INFO: namespace pods-186 deletion completed in 22.162514769s
• [SLOW TEST:30.902 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:15:04.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 13:15:04.745: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 15 13:15:04.764: INFO: Number of nodes with available pods: 0
Dec 15 13:15:04.764: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:06.067: INFO: Number of nodes with available pods: 0
Dec 15 13:15:06.067: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:07.164: INFO: Number of nodes with available pods: 0
Dec 15 13:15:07.165: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:07.799: INFO: Number of nodes with available pods: 0
Dec 15 13:15:07.799: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:08.822: INFO: Number of nodes with available pods: 0
Dec 15 13:15:08.823: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:10.727: INFO: Number of nodes with available pods: 0
Dec 15 13:15:10.727: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:11.318: INFO: Number of nodes with available pods: 0
Dec 15 13:15:11.318: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:11.789: INFO: Number of nodes with available pods: 0
Dec 15 13:15:11.789: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:12.785: INFO: Number of nodes with available pods: 0
Dec 15 13:15:12.785: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:13.811: INFO: Number of nodes with available pods: 1
Dec 15 13:15:13.811: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:14.817: INFO: Number of nodes with available pods: 1
Dec 15 13:15:14.817: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:15.796: INFO: Number of nodes with available pods: 2
Dec 15 13:15:15.796: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 15 13:15:15.912: INFO: Wrong image for pod: daemon-set-lxrwg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:15.913: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:17.011: INFO: Wrong image for pod: daemon-set-lxrwg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:17.011: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:18.011: INFO: Wrong image for pod: daemon-set-lxrwg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:18.011: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:19.014: INFO: Wrong image for pod: daemon-set-lxrwg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:19.014: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:20.008: INFO: Wrong image for pod: daemon-set-lxrwg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:20.009: INFO: Pod daemon-set-lxrwg is not available
Dec 15 13:15:20.009: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:21.007: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:21.008: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:22.009: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:22.009: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:23.007: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:23.007: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:24.012: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:24.012: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:25.494: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:25.494: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:27.352: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:27.352: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:28.016: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:28.016: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:29.008: INFO: Pod daemon-set-pbz4k is not available
Dec 15 13:15:29.008: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:30.019: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:31.012: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:32.058: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:33.014: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:34.012: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:35.012: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:36.011: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:37.007: INFO: Wrong image for pod: daemon-set-r92j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 15 13:15:37.007: INFO: Pod daemon-set-r92j8 is not available
Dec 15 13:15:38.012: INFO: Pod daemon-set-m67xf is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 15 13:15:38.024: INFO: Number of nodes with available pods: 1
Dec 15 13:15:38.024: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:39.044: INFO: Number of nodes with available pods: 1
Dec 15 13:15:39.044: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:40.053: INFO: Number of nodes with available pods: 1
Dec 15 13:15:40.053: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:41.055: INFO: Number of nodes with available pods: 1
Dec 15 13:15:41.055: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:42.120: INFO: Number of nodes with available pods: 1
Dec 15 13:15:42.120: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:43.039: INFO: Number of nodes with available pods: 1
Dec 15 13:15:43.039: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:44.037: INFO: Number of nodes with available pods: 1
Dec 15 13:15:44.037: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:45.038: INFO: Number of nodes with available pods: 1
Dec 15 13:15:45.038: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:46.047: INFO: Number of nodes with available pods: 1
Dec 15 13:15:46.048: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:47.036: INFO: Number of nodes with available pods: 1
Dec 15 13:15:47.036: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:15:48.069: INFO: Number of nodes with available pods: 2
Dec 15 13:15:48.069: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3167, will wait for the garbage collector to delete the pods
Dec 15 13:15:48.151: INFO: Deleting DaemonSet.extensions daemon-set took: 8.041006ms
Dec 15 13:15:48.452: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.794559ms
Dec 15 13:15:57.862: INFO: Number of nodes with available pods: 0
Dec 15 13:15:57.862: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 13:15:57.867: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3167/daemonsets","resourceVersion":"16760213"},"items":null}
Dec 15 13:15:57.871: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3167/pods","resourceVersion":"16760213"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:15:57.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3167" for this suite.
Dec 15 13:16:03.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:16:04.102: INFO: namespace daemonsets-3167 deletion completed in 6.183587566s
• [SLOW TEST:59.548 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:16:04.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-569
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-569 to expose endpoints map[]
Dec 15 13:16:04.355: INFO: Get endpoints failed (73.905161ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 15 13:16:05.366: INFO: successfully validated that service endpoint-test2 in namespace services-569 exposes endpoints map[] (1.085410734s elapsed)
STEP: Creating pod pod1 in namespace services-569
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-569 to expose endpoints map[pod1:[80]]
Dec 15 13:16:09.499: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.106579031s elapsed, will retry)
Dec 15 13:16:14.630: INFO: successfully validated that service endpoint-test2 in namespace services-569 exposes endpoints map[pod1:[80]] (9.237606104s elapsed)
STEP: Creating pod pod2 in namespace services-569
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-569 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 15 13:16:19.512: INFO: Unexpected endpoints: found map[30e8c991-6a89-463d-a66b-fc3dcaf1b3df:[80]], expected map[pod1:[80] pod2:[80]] (4.849668873s elapsed, will retry)
Dec 15 13:16:22.630: INFO: successfully validated that service endpoint-test2 in namespace services-569 exposes endpoints map[pod1:[80] pod2:[80]] (7.968056417s elapsed)
STEP: Deleting pod pod1 in namespace services-569
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-569 to expose endpoints map[pod2:[80]]
Dec 15 13:16:23.684: INFO: successfully validated that service endpoint-test2 in namespace services-569 exposes endpoints map[pod2:[80]] (1.043236832s elapsed)
STEP: Deleting pod pod2 in namespace services-569
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-569 to expose endpoints map[]
Dec 15 13:16:25.084: INFO: successfully validated that service endpoint-test2 in namespace services-569 exposes endpoints map[] (1.367352155s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:16:25.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-569" for this suite.
Dec 15 13:16:31.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:16:31.649: INFO: namespace services-569 deletion completed in 6.193888162s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:27.546 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:16:31.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 15 13:16:43.766: INFO: Pod pod-hostip-e87170fe-767f-456a-92a1-4f67c145c8f6 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:16:43.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4501" for this suite.
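The host-IP spec that just finished only asserts that status.hostIP is populated once the pod is scheduled onto a node. Reading it back looks like this (same client-go vintage assumption as the earlier sketch; the pod name is hypothetical):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod, err := cs.CoreV1().Pods("default").Get("pod-hostip-demo", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Empty until the scheduler binds the pod and the kubelet reports status back.
        fmt.Println("hostIP:", pod.Status.HostIP)
    }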
Dec 15 13:17:21.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:17:21.974: INFO: namespace pods-4501 deletion completed in 38.195746411s
• [SLOW TEST:50.324 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:17:21.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 15 13:17:22.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8809'
Dec 15 13:17:25.407: INFO: stderr: ""
Dec 15 13:17:25.407: INFO: stdout: "pod/pause created\n"
Dec 15 13:17:25.407: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 15 13:17:25.408: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8809" to be "running and ready"
Dec 15 13:17:25.510: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 101.938ms
Dec 15 13:17:27.545: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136841376s
Dec 15 13:17:29.560: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152241467s
Dec 15 13:17:31.575: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167650394s
Dec 15 13:17:33.590: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181794995s
Dec 15 13:17:35.600: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.191927318s
Dec 15 13:17:35.600: INFO: Pod "pause" satisfied condition "running and ready"
Dec 15 13:17:35.600: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 15 13:17:35.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8809'
Dec 15 13:17:35.817: INFO: stderr: ""
Dec 15 13:17:35.817: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 15 13:17:35.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8809'
Dec 15 13:17:35.928: INFO: stderr: ""
Dec 15 13:17:35.928: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 15 13:17:35.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8809'
Dec 15 13:17:36.110: INFO: stderr: ""
Dec 15 13:17:36.110: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 15 13:17:36.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8809'
Dec 15 13:17:36.379: INFO: stderr: ""
Dec 15 13:17:36.380: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 15 13:17:36.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8809'
Dec 15 13:17:36.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 13:17:36.653: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 15 13:17:36.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8809'
Dec 15 13:17:36.941: INFO: stderr: "No resources found.\n"
Dec 15 13:17:36.941: INFO: stdout: ""
Dec 15 13:17:36.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8809 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 13:17:37.160: INFO: stderr: ""
Dec 15 13:17:37.160: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:17:37.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8809" for this suite.
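The kubectl label / label- pair above is plain metadata manipulation; the trailing dash removes the key. The same add-then-remove cycle expressed directly against the API (same client-go vintage; pod name taken from the test):

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("default")

        // Equivalent of: kubectl label pods pause testing-label=testing-label-value
        pod, err := pods.Get("pause", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["testing-label"] = "testing-label-value"
        pod, err = pods.Update(pod) // returned object carries the fresh resourceVersion
        if err != nil {
            panic(err)
        }

        // Equivalent of: kubectl label pods pause testing-label-
        delete(pod.Labels, "testing-label")
        if _, err := pods.Update(pod); err != nil {
            panic(err)
        }
    }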
Dec 15 13:17:43.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:17:43.459: INFO: namespace kubectl-8809 deletion completed in 6.279398794s
• [SLOW TEST:21.485 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:17:43.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9270e2af-7f08-478d-98db-76b7541113a0
STEP: Creating a pod to test consume configMaps
Dec 15 13:17:43.547: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c" in namespace "configmap-2616" to be "success or failure"
Dec 15 13:17:43.551: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064144ms
Dec 15 13:17:46.192: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.644278028s
Dec 15 13:17:48.205: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.657333069s
Dec 15 13:17:50.220: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672082271s
Dec 15 13:17:52.251: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.703497268s
Dec 15 13:17:54.267: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.719594586s
Dec 15 13:17:56.280: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.73242096s
STEP: Saw pod success
Dec 15 13:17:56.280: INFO: Pod "pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c" satisfied condition "success or failure"
Dec 15 13:17:56.285: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c container configmap-volume-test:
STEP: delete the pod
Dec 15 13:17:56.388: INFO: Waiting for pod pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c to disappear
Dec 15 13:17:56.399: INFO: Pod pod-configmaps-f8cb2d3c-077f-4bf8-807d-3adea4a2652c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:17:56.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2616" for this suite.
Dec 15 13:18:02.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:18:02.689: INFO: namespace configmap-2616 deletion completed in 6.279111996s
• [SLOW TEST:19.229 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:18:02.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-0166afa5-0a0f-409e-9485-f523b327fd4f
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:18:02.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-507" for this suite.
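The empty-key spec relies on API-server validation rather than any node behavior: a ConfigMap whose data map contains "" as a key is rejected at create time. A minimal sketch of that expected failure (hypothetical names, same client-go vintage):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
            Data:       map[string]string{"": "value"}, // invalid: keys must be non-empty
        }
        if _, err := cs.CoreV1().ConfigMaps("default").Create(cm); err != nil {
            fmt.Println("create rejected as expected:", err)
        }
    }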
Dec 15 13:18:08.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:18:09.011: INFO: namespace configmap-507 deletion completed in 6.185255374s
• [SLOW TEST:6.322 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:18:09.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 15 13:18:09.246: INFO: Waiting up to 5m0s for pod "pod-792078b9-16da-4211-b97b-f099e18aba22" in namespace "emptydir-3187" to be "success or failure"
Dec 15 13:18:09.256: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196951ms
Dec 15 13:18:11.264: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018380672s
Dec 15 13:18:13.294: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047868779s
Dec 15 13:18:15.365: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118787967s
Dec 15 13:18:17.397: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151346643s
Dec 15 13:18:19.411: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.16496665s
STEP: Saw pod success
Dec 15 13:18:19.411: INFO: Pod "pod-792078b9-16da-4211-b97b-f099e18aba22" satisfied condition "success or failure"
Dec 15 13:18:19.420: INFO: Trying to get logs from node iruya-node pod pod-792078b9-16da-4211-b97b-f099e18aba22 container test-container:
STEP: delete the pod
Dec 15 13:18:19.560: INFO: Waiting for pod pod-792078b9-16da-4211-b97b-f099e18aba22 to disappear
Dec 15 13:18:19.568: INFO: Pod pod-792078b9-16da-4211-b97b-f099e18aba22 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:18:19.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3187" for this suite.
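For reference, this is the general shape of a pod that exercises an emptyDir on the node's default medium, in the spirit of the spec above. It is a hand-written sketch, not the suite's own mounttest pod; the names, busybox image, and shell command are illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "scratch",
                    // No Medium set, so the volume lives on node-local disk;
                    // Medium: v1.StorageMediumMemory would make it tmpfs instead.
                    VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
                }},
                Containers: []v1.Container{{
                    Name:    "writer",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && stat -c %a /scratch/f"},
                    VolumeMounts: []v1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
                }},
            },
        }
        fmt.Printf("%+v\n", pod.Spec.Volumes[0]) // submit with a clientset as in the earlier sketches
    }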
Dec 15 13:18:25.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:18:25.708: INFO: namespace emptydir-3187 deletion completed in 6.130489071s
• [SLOW TEST:16.697 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:18:25.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8dcbfb65-59d1-40ee-991a-53d3810e3bc3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:18:40.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8377" for this suite.
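A ConfigMap carries both a data map (valid UTF-8 strings) and a binaryData map (arbitrary bytes); mounted as a volume, each key becomes a file, which is what the spec above checks for both kinds. A sketch of such an object (names and bytes are illustrative):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
            Data:       map[string]string{"text": "hello"},                  // must be valid UTF-8
            BinaryData: map[string][]byte{"blob": {0x00, 0xFF, 0x10, 0x20}}, // arbitrary bytes allowed
        }
        fmt.Println(cm.Name, "carries", len(cm.BinaryData["blob"]), "binary bytes under key 'blob'")
    }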
Dec 15 13:19:02.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:19:02.655: INFO: namespace configmap-8377 deletion completed in 22.350267973s
• [SLOW TEST:36.946 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:19:02.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:19:02.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f" in namespace "downward-api-9801" to be "success or failure"
Dec 15 13:19:02.892: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.818705ms
Dec 15 13:19:04.903: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032395131s
Dec 15 13:19:06.917: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045975402s
Dec 15 13:19:08.928: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057151841s
Dec 15 13:19:10.963: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092708373s
Dec 15 13:19:12.977: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105905459s
STEP: Saw pod success
Dec 15 13:19:12.977: INFO: Pod "downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f" satisfied condition "success or failure"
Dec 15 13:19:12.981: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f container client-container:
STEP: delete the pod
Dec 15 13:19:13.079: INFO: Waiting for pod downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f to disappear
Dec 15 13:19:13.086: INFO: Pod downwardapi-volume-92c1213b-fee9-44aa-9e6b-2c35a0e3847f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:19:13.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9801" for this suite.
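The downward API volume test above projects the container's own memory request into a file via a resourceFieldRef. A sketch of the relevant spec wiring (spec construction only; names, image, and mount path are illustrative, not the suite's exact pod):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        DownwardAPI: &v1.DownwardAPIVolumeSource{
                            Items: []v1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                ResourceFieldRef: &v1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("32Mi")},
                    },
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        fmt.Println(pod.Name) // the mounted file holds the request in bytes, e.g. 33554432 for 32Mi
    }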
Dec 15 13:19:19.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:19:19.243: INFO: namespace downward-api-9801 deletion completed in 6.15169163s
• [SLOW TEST:16.587 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:19:19.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:19:19.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7020" for this suite.
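The QoS class is computed by the API server from the resource spec at admission, which is why the spec above can verify it immediately after submitting, without waiting for the pod to run. Requests equal to limits for every container yield Guaranteed. A sketch (hypothetical names, same client-go vintage):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        limits := v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        }
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "main",
                    Image: "busybox",
                    // Requests default to the limits, so requests == limits => Guaranteed.
                    Resources: v1.ResourceRequirements{Limits: limits},
                }},
            },
        }
        created, err := cs.CoreV1().Pods("default").Create(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println("QoS class:", created.Status.QOSClass) // expected: Guaranteed
    }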
Dec 15 13:19:43.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:19:43.520: INFO: namespace pods-7020 deletion completed in 24.153201062s
• [SLOW TEST:24.277 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:19:43.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 15 13:19:43.656: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:19:43.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2056" for this suite.
Dec 15 13:19:49.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:19:49.939: INFO: namespace kubectl-2056 deletion completed in 6.191707004s
• [SLOW TEST:6.417 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:19:49.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 15 13:19:50.062: INFO: Waiting up to 5m0s for pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29" in namespace "emptydir-7375" to be "success or failure"
Dec 15 13:19:50.074: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29": Phase="Pending", Reason="", readiness=false. Elapsed: 12.155691ms
Dec 15 13:19:52.087: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024674353s
Dec 15 13:19:54.098: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035817905s
Dec 15 13:19:56.105: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043129279s
Dec 15 13:19:58.117: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054830519s
Dec 15 13:20:00.136: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073797079s
STEP: Saw pod success
Dec 15 13:20:00.136: INFO: Pod "pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29" satisfied condition "success or failure"
Dec 15 13:20:00.146: INFO: Trying to get logs from node iruya-node pod pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29 container test-container:
STEP: delete the pod
Dec 15 13:20:00.252: INFO: Waiting for pod pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29 to disappear
Dec 15 13:20:00.264: INFO: Pod pod-5f33bc43-3de1-431a-b3d9-e73c44bdfd29 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:20:00.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7375" for this suite.
Dec 15 13:20:06.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:20:06.496: INFO: namespace emptydir-7375 deletion completed in 6.22590603s
• [SLOW TEST:16.557 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:20:06.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 15 13:20:06.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5380'
Dec 15 13:20:06.893: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 13:20:06.893: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 15 13:20:10.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5380'
Dec 15 13:20:11.359: INFO: stderr: ""
Dec 15 13:20:11.359: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:20:11.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5380" for this suite.
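As the stderr above notes, kubectl run --generator=deployment/apps.v1 is deprecated; the non-deprecated equivalent is to create the Deployment object directly. A sketch of that (hypothetical labels and namespace, same client-go vintage):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        replicas := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-deployment"}
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment", Labels: labels},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "e2e-test-nginx-deployment",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        if _, err := cs.AppsV1().Deployments("default").Create(dep); err != nil {
            panic(err)
        }
        fmt.Println("deployment created")
    }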
Dec 15 13:20:17.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:20:17.603: INFO: namespace kubectl-5380 deletion completed in 6.235735191s
• [SLOW TEST:11.105 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:20:17.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:20:17.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc" in namespace "projected-4348" to be "success or failure"
Dec 15 13:20:17.735: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc": Phase="Pending", Reason="", readiness=false. Elapsed: 59.329665ms
Dec 15 13:20:19.746: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069573355s
Dec 15 13:20:21.810: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133552953s
Dec 15 13:20:23.836: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159995265s
Dec 15 13:20:25.849: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172453735s
Dec 15 13:20:27.872: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.195446694s
STEP: Saw pod success
Dec 15 13:20:27.872: INFO: Pod "downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc" satisfied condition "success or failure"
Dec 15 13:20:27.889: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc container client-container:
STEP: delete the pod
Dec 15 13:20:28.139: INFO: Waiting for pod downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc to disappear
Dec 15 13:20:28.184: INFO: Pod downwardapi-volume-58695b78-27bf-4856-8e8a-23c37c9e40bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:20:28.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4348" for this suite.
Dec 15 13:20:34.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:20:34.374: INFO: namespace projected-4348 deletion completed in 6.180559222s
• [SLOW TEST:16.771 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:20:34.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:20:34.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4416" for this suite.
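The secure-master-service spec is, in essence, a check that the built-in kubernetes service exists in the default namespace and exposes the https port. An equivalent read-back (same client-go vintage):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range svc.Spec.Ports {
            fmt.Printf("port %d (%s)\n", p.Port, p.Name) // expect the https port, 443
        }
    }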
Dec 15 13:20:40.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:20:40.724: INFO: namespace services-4416 deletion completed in 6.240291987s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.349 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:20:40.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 15 13:20:40.951: INFO: Waiting up to 5m0s for pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5" in namespace "var-expansion-3203" to be "success or failure"
Dec 15 13:20:41.046: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 95.321886ms
Dec 15 13:20:43.057: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105667505s
Dec 15 13:20:45.072: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121138397s
Dec 15 13:20:47.080: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129192342s
Dec 15 13:20:49.098: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147059655s
Dec 15 13:20:51.109: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15745052s
STEP: Saw pod success
Dec 15 13:20:51.109: INFO: Pod "var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5" satisfied condition "success or failure"
Dec 15 13:20:51.114: INFO: Trying to get logs from node iruya-node pod var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5 container dapi-container:
STEP: delete the pod
Dec 15 13:20:51.281: INFO: Waiting for pod var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5 to disappear
Dec 15 13:20:51.308: INFO: Pod var-expansion-314c36c7-43db-4a54-bcee-d7add890b5e5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:20:51.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3203" for this suite.
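Command substitution here means the kubelet expands $(VAR) references in a container's command/args from the declared env vars before the process starts; it is not shell expansion. A sketch of the wiring (spec construction only; names and values illustrative):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:  "dapi-container",
                    Image: "busybox",
                    Env:   []v1.EnvVar{{Name: "MESSAGE", Value: "hello from the cluster"}},
                    // The kubelet replaces $(MESSAGE) with the env value before exec.
                    Command: []string{"sh", "-c", "echo $(MESSAGE)"},
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].Command)
    }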
Dec 15 13:20:57.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:20:57.528: INFO: namespace var-expansion-3203 deletion completed in 6.212349671s
• [SLOW TEST:16.800 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:20:57.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 15 13:20:57.802: INFO: Waiting up to 5m0s for pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6" in namespace "var-expansion-137" to be "success or failure"
Dec 15 13:20:57.813: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.76177ms
Dec 15 13:20:59.835: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032959987s
Dec 15 13:21:01.841: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039288086s
Dec 15 13:21:03.860: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05794926s
Dec 15 13:21:05.879: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077180012s
Dec 15 13:21:07.895: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092771615s
STEP: Saw pod success
Dec 15 13:21:07.895: INFO: Pod "var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6" satisfied condition "success or failure"
Dec 15 13:21:07.908: INFO: Trying to get logs from node iruya-node pod var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6 container dapi-container:
STEP: delete the pod
Dec 15 13:21:08.178: INFO: Waiting for pod var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6 to disappear
Dec 15 13:21:08.186: INFO: Pod var-expansion-10974c83-a4e9-4f32-b7a5-f96dfd00a4b6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:21:08.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-137" for this suite.
Dec 15 13:21:14.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:21:14.376: INFO: namespace var-expansion-137 deletion completed in 6.182095891s
• [SLOW TEST:16.847 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:21:14.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 15 13:21:14.545: INFO: Waiting up to 5m0s for pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed" in namespace "downward-api-8772" to be "success or failure"
Dec 15 13:21:14.563: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed": Phase="Pending", Reason="", readiness=false. Elapsed: 18.730488ms
Dec 15 13:21:16.581: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035951798s
Dec 15 13:21:18.600: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055306141s
Dec 15 13:21:20.620: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075339736s
Dec 15 13:21:22.633: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088166426s
Dec 15 13:21:24.643: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097874644s
STEP: Saw pod success
Dec 15 13:21:24.643: INFO: Pod "downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed" satisfied condition "success or failure"
Dec 15 13:21:24.647: INFO: Trying to get logs from node iruya-node pod downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed container dapi-container:
STEP: delete the pod
Dec 15 13:21:24.786: INFO: Waiting for pod downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed to disappear
Dec 15 13:21:24.799: INFO: Pod downward-api-c15d6628-0d82-47a8-8f7e-dca7776adaed no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:21:24.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8772" for this suite.
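When a container declares no CPU or memory limits, a downward-API resourceFieldRef for limits.cpu / limits.memory resolves to the node's allocatable capacity, which is what the spec above asserts. A sketch of the env wiring (spec construction only; names illustrative):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    // No Resources set: both values fall back to node allocatable.
                    Env: []v1.EnvVar{
                        {
                            Name: "CPU_LIMIT",
                            ValueFrom: &v1.EnvVarSource{
                                ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"},
                            },
                        },
                        {
                            Name: "MEMORY_LIMIT",
                            ValueFrom: &v1.EnvVarSource{
                                ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"},
                            },
                        },
                    },
                }},
            },
        }
        fmt.Println(len(pod.Spec.Containers[0].Env), "downward API env vars")
    }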
Dec 15 13:21:30.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:21:30.988: INFO: namespace downward-api-8772 deletion completed in 6.182498255s • [SLOW TEST:16.611 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:21:30.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 15 13:21:31.129: INFO: Waiting up to 5m0s for pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c" in namespace "emptydir-2062" to be "success or failure" Dec 15 13:21:31.141: INFO: Pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027201ms Dec 15 13:21:33.150: INFO: Pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021356029s Dec 15 13:21:35.157: INFO: Pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027678054s Dec 15 13:21:37.168: INFO: Pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039106678s Dec 15 13:21:39.462: INFO: Pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.332638283s STEP: Saw pod success Dec 15 13:21:39.462: INFO: Pod "pod-457ef2df-b46a-482e-9565-d90c4cdfba9c" satisfied condition "success or failure" Dec 15 13:21:39.468: INFO: Trying to get logs from node iruya-node pod pod-457ef2df-b46a-482e-9565-d90c4cdfba9c container test-container: STEP: delete the pod Dec 15 13:21:39.515: INFO: Waiting for pod pod-457ef2df-b46a-482e-9565-d90c4cdfba9c to disappear Dec 15 13:21:39.522: INFO: Pod pod-457ef2df-b46a-482e-9565-d90c4cdfba9c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:21:39.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2062" for this suite. 
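The (root,0644,default) triple above means: write as root, expect file mode 0644, on the default emptyDir medium (node disk, as opposed to the tmpfs variants that set medium: Memory). Roughly, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}        # default medium (node disk); medium: Memory gives the tmpfs variants
EOF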
Dec 15 13:21:45.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:21:45.759: INFO: namespace emptydir-2062 deletion completed in 6.23188812s • [SLOW TEST:14.771 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:21:45.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-fe9065a4-6895-460c-9b68-e0c575b03e6f STEP: Creating a pod to test consume configMaps Dec 15 13:21:45.951: INFO: Waiting up to 5m0s for pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26" in namespace "configmap-7526" to be "success or failure" Dec 15 13:21:45.968: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26": Phase="Pending", Reason="", readiness=false. Elapsed: 16.79483ms Dec 15 13:21:47.980: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028307168s Dec 15 13:21:50.008: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056694928s Dec 15 13:21:52.016: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064258644s Dec 15 13:21:54.034: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083029265s Dec 15 13:21:56.043: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091503281s STEP: Saw pod success Dec 15 13:21:56.043: INFO: Pod "pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26" satisfied condition "success or failure" Dec 15 13:21:56.050: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26 container configmap-volume-test: STEP: delete the pod Dec 15 13:21:56.147: INFO: Waiting for pod pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26 to disappear Dec 15 13:21:56.157: INFO: Pod pod-configmaps-e519ce58-b4e3-44cc-9e1e-938f40826d26 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:21:56.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7526" for this suite. 
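"With mappings" refers to the items: stanza, which surfaces a ConfigMap key at a chosen relative path instead of a file named after the key. A sketch (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-2    # the "mapping": key data-1 surfaces at this path
EOF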
Dec 15 13:22:02.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:22:02.331: INFO: namespace configmap-7526 deletion completed in 6.165802615s • [SLOW TEST:16.571 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:22:02.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-40748940-8287-4299-916f-b3cb7e1467c6 STEP: Creating secret with name secret-projected-all-test-volume-d524dc7b-d1ae-4487-9f9a-2a0603e478b2 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 15 13:22:02.500: INFO: Waiting up to 5m0s for pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25" in namespace "projected-5601" to be "success or failure" Dec 15 13:22:02.506: INFO: Pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25": Phase="Pending", Reason="", readiness=false. Elapsed: 5.346442ms Dec 15 13:22:04.524: INFO: Pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023830969s Dec 15 13:22:06.534: INFO: Pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033509744s Dec 15 13:22:08.606: INFO: Pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104923832s Dec 15 13:22:10.640: INFO: Pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.13891983s STEP: Saw pod success Dec 15 13:22:10.640: INFO: Pod "projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25" satisfied condition "success or failure" Dec 15 13:22:10.654: INFO: Trying to get logs from node iruya-node pod projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25 container projected-all-volume-test: STEP: delete the pod Dec 15 13:22:10.802: INFO: Waiting for pod projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25 to disappear Dec 15 13:22:10.909: INFO: Pod projected-volume-bf93dc3b-460f-4258-89fd-071ad9df6b25 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:22:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5601" for this suite. 
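The projected-combined test mounts one volume fed by three APIs at once: a ConfigMap, a Secret, and the downward API. Approximately, with hypothetical names:

kubectl create configmap projected-cm-demo --from-literal=configmap-data=from-configmap
kubectl create secret generic projected-secret-demo --from-literal=secret-data=from-secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /all/podname /all/configmap-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                  # one volume, three backing APIs
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm-demo
      - secret:
          name: projected-secret-demo
EOF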
Dec 15 13:22:17.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:22:17.153: INFO: namespace projected-5601 deletion completed in 6.192855665s • [SLOW TEST:14.822 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:22:17.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-04ed241c-325b-49ef-b29a-9c312281c0d8 STEP: Creating a pod to test consume secrets Dec 15 13:22:17.294: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116" in namespace "projected-3871" to be "success or failure" Dec 15 13:22:17.321: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116": Phase="Pending", Reason="", readiness=false. Elapsed: 26.992931ms Dec 15 13:22:19.333: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03973215s Dec 15 13:22:21.344: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050016554s Dec 15 13:22:23.352: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05825749s Dec 15 13:22:25.362: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068840325s Dec 15 13:22:27.378: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084767877s STEP: Saw pod success Dec 15 13:22:27.379: INFO: Pod "pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116" satisfied condition "success or failure" Dec 15 13:22:27.385: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116 container secret-volume-test: STEP: delete the pod Dec 15 13:22:27.463: INFO: Waiting for pod pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116 to disappear Dec 15 13:22:27.467: INFO: Pod pod-projected-secrets-d166ec48-a214-40a3-88fb-3a971b692116 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:22:27.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3871" for this suite. 
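Here a single projected Secret backs two separate volumes in the same pod. Sketch (hypothetical names):

kubectl create secret generic multi-mount-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: multi-mount-secret-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: multi-mount-secret-demo   # same Secret consumed twice
EOF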
Dec 15 13:22:35.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:22:35.694: INFO: namespace projected-3871 deletion completed in 8.182944102s • [SLOW TEST:18.541 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:22:35.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Dec 15 13:22:44.393: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8451 pod-service-account-42b85d68-1bc8-4f6e-957e-061329fd35d7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 15 13:22:45.044: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8451 pod-service-account-42b85d68-1bc8-4f6e-957e-061329fd35d7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 15 13:22:45.491: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8451 pod-service-account-42b85d68-1bc8-4f6e-957e-061329fd35d7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:22:45.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8451" for this suite. 
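The three kubectl exec invocations above read the standard auto-mounted service-account files; against any pod with the token mounted (pod name hypothetical) that is:

kubectl exec pod-service-account-demo -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec pod-service-account-demo -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec pod-service-account-demo -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace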
Dec 15 13:22:52.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:22:52.165: INFO: namespace svcaccounts-8451 deletion completed in 6.176001796s • [SLOW TEST:16.470 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:22:52.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Dec 15 13:22:52.342: INFO: Waiting up to 5m0s for pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9" in namespace "containers-3359" to be "success or failure" Dec 15 13:22:52.352: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.903946ms Dec 15 13:22:54.363: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020863759s Dec 15 13:22:56.374: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032496438s Dec 15 13:22:58.384: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041852272s Dec 15 13:23:00.394: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051817664s Dec 15 13:23:02.401: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059112032s STEP: Saw pod success Dec 15 13:23:02.401: INFO: Pod "client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9" satisfied condition "success or failure" Dec 15 13:23:02.406: INFO: Trying to get logs from node iruya-node pod client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9 container test-container: STEP: delete the pod Dec 15 13:23:02.521: INFO: Waiting for pod client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9 to disappear Dec 15 13:23:02.531: INFO: Pod client-containers-4e3244c2-3d14-4112-851e-80001c80d1f9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:23:02.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3359" for this suite. 
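The override test relies on the usual mapping: command replaces the image ENTRYPOINT and args replaces the image CMD. Sketch (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]             # replaces the image ENTRYPOINT
    args: ["override", "arguments"]    # replaces the image CMD
EOF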
Dec 15 13:23:08.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:23:08.747: INFO: namespace containers-3359 deletion completed in 6.204948327s • [SLOW TEST:16.581 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:23:08.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 15 13:23:08.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396" in namespace "projected-3390" to be "success or failure" Dec 15 13:23:08.897: INFO: Pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396": Phase="Pending", Reason="", readiness=false. Elapsed: 58.564315ms Dec 15 13:23:10.908: INFO: Pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06961502s Dec 15 13:23:13.381: INFO: Pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542075535s Dec 15 13:23:15.390: INFO: Pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550976619s Dec 15 13:23:17.399: INFO: Pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.56017272s STEP: Saw pod success Dec 15 13:23:17.399: INFO: Pod "downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396" satisfied condition "success or failure" Dec 15 13:23:17.404: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396 container client-container: STEP: delete the pod Dec 15 13:23:17.580: INFO: Waiting for pod downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396 to disappear Dec 15 13:23:17.625: INFO: Pod downwardapi-volume-768f40c9-0032-42ac-9df2-f712c1b22396 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:23:17.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3390" for this suite. 
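The cpu-limit file comes from a projected downwardAPI source with a resourceFieldRef; in the volume form containerName is mandatory and divisor controls the unit. A sketch (names and the 500m limit are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container   # required in the volume form
              resource: limits.cpu
              divisor: 1m                       # report the limit in millicores (prints 500)
EOF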
Dec 15 13:23:23.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:23:23.861: INFO: namespace projected-3390 deletion completed in 6.2251676s • [SLOW TEST:15.114 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:23:23.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 15 13:23:32.757: INFO: Successfully updated pod "labelsupdate31ca9499-ad65-4409-9825-a3505ee76734" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:23:34.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5850" for this suite. 
Dec 15 13:24:14.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:24:15.029: INFO: namespace projected-5850 deletion completed in 40.185410113s • [SLOW TEST:51.166 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:24:15.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 15 13:24:15.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8" in namespace "downward-api-4925" to be "success or failure" Dec 15 13:24:15.277: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.167454ms Dec 15 13:24:17.287: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039660642s Dec 15 13:24:19.295: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047963037s Dec 15 13:24:21.341: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093676382s Dec 15 13:24:23.349: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101878388s Dec 15 13:24:25.361: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11391492s STEP: Saw pod success Dec 15 13:24:25.362: INFO: Pod "downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8" satisfied condition "success or failure" Dec 15 13:24:25.365: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8 container client-container: STEP: delete the pod Dec 15 13:24:25.547: INFO: Waiting for pod downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8 to disappear Dec 15 13:24:25.556: INFO: Pod downwardapi-volume-8dfeec73-e45a-486f-a7fc-1477e6f7cfb8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:24:25.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4925" for this suite. 
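"Set mode on item file" exercises the per-item mode field of a downwardAPI volume. A sketch (hypothetical names; assumes busybox's stat supports -L to follow the volume's symlinks):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["stat", "-L", "-c", "%a", "/etc/podinfo/podname"]   # expected to print 400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400          # the per-item file mode the test checks
EOF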
Dec 15 13:24:31.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:24:31.777: INFO: namespace downward-api-4925 deletion completed in 6.213204037s • [SLOW TEST:16.748 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:24:31.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 15 13:24:31.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634" in namespace "projected-2062" to be "success or failure" Dec 15 13:24:32.014: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634": Phase="Pending", Reason="", readiness=false. Elapsed: 16.89416ms Dec 15 13:24:34.030: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033104812s Dec 15 13:24:36.043: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046091347s Dec 15 13:24:38.052: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055068414s Dec 15 13:24:40.060: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062575663s Dec 15 13:24:42.067: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069188792s STEP: Saw pod success Dec 15 13:24:42.067: INFO: Pod "downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634" satisfied condition "success or failure" Dec 15 13:24:42.069: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634 container client-container: STEP: delete the pod Dec 15 13:24:42.210: INFO: Waiting for pod downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634 to disappear Dec 15 13:24:42.219: INFO: Pod downwardapi-volume-35467d1f-b306-4271-8dd8-edeafd761634 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:24:42.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2062" for this suite. 
Dec 15 13:24:48.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:24:48.420: INFO: namespace projected-2062 deletion completed in 6.188163004s • [SLOW TEST:16.641 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:24:48.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 15 13:24:48.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5547' Dec 15 13:24:48.731: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 15 13:24:48.732: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Dec 15 13:24:50.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5547' Dec 15 13:24:51.071: INFO: stderr: "" Dec 15 13:24:51.072: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:24:51.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5547" for this suite. 
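The stderr above records the v1.15-era deprecation of kubectl run's deployment generator; the same actions, spelled out:

# Deprecated form used above (still creates a Deployment in v1.15):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# Non-deprecated alternatives from the warning:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine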
Dec 15 13:24:57.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:24:57.495: INFO: namespace kubectl-5547 deletion completed in 6.416198973s • [SLOW TEST:9.074 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:24:57.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 15 13:25:13.869: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:13.911: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:15.912: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:15.942: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:17.911: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:17.924: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:19.911: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:19.918: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:21.911: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:21.917: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:23.911: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:23.924: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:25.911: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:25.924: INFO: Pod pod-with-prestop-http-hook still exists Dec 15 13:25:27.911: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 15 13:25:27.922: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:25:27.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4821" for this suite. 
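A preStop httpGet hook is fired by the kubelet before the container is signaled during deletion. The suite points the hook at a separately created handler pod; this self-contained sketch targets the pod's own nginx instead (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:        # issued by the kubelet before termination begins
          path: /
          port: 80      # here the pod's own nginx; the suite targets a dedicated handler pod
EOF
kubectl delete pod pod-with-prestop-http-hook-demo   # triggers the hook, then termination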
Dec 15 13:25:50.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:25:50.465: INFO: namespace container-lifecycle-hook-4821 deletion completed in 22.492306118s • [SLOW TEST:52.970 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:25:50.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9448 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9448 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9448 Dec 15 13:25:50.796: INFO: Found 0 stateful pods, waiting for 1 Dec 15 13:26:00.806: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 15 13:26:00.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 15 13:26:01.413: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 15 13:26:01.413: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 15 13:26:01.413: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 15 13:26:01.425: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 15 13:26:11.438: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 15 13:26:11.438: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 13:26:11.480: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999606s Dec 15 13:26:12.494: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976705723s Dec 15 13:26:13.506: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 7.962595902s Dec 15 13:26:14.527: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.949996977s Dec 15 13:26:15.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.929458244s Dec 15 13:26:16.590: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.896528902s Dec 15 13:26:17.607: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.865926474s Dec 15 13:26:18.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.849951854s Dec 15 13:26:19.643: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.839889852s Dec 15 13:26:20.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 813.916242ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9448 Dec 15 13:26:21.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 15 13:26:22.409: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 15 13:26:22.409: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 15 13:26:22.409: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 15 13:26:22.419: INFO: Found 1 stateful pods, waiting for 3 Dec 15 13:26:32.473: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 15 13:26:32.473: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 15 13:26:32.473: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 15 13:26:42.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 15 13:26:42.435: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 15 13:26:42.435: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 15 13:26:42.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 15 13:26:43.199: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 15 13:26:43.199: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 15 13:26:43.199: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 15 13:26:43.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 15 13:26:43.761: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 15 13:26:43.761: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 15 13:26:43.761: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 15 13:26:43.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 15 13:26:44.198: 
INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 15 13:26:44.198: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 15 13:26:44.198: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 15 13:26:44.198: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 13:26:44.207: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 15 13:26:54.231: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 15 13:26:54.231: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 15 13:26:54.231: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 15 13:26:54.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999612s Dec 15 13:26:55.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987519885s Dec 15 13:26:56.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.899245863s Dec 15 13:26:57.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.878615765s Dec 15 13:26:58.431: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.821655511s Dec 15 13:26:59.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.810351135s Dec 15 13:27:00.645: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.621880314s Dec 15 13:27:01.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.595731353s Dec 15 13:27:02.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.585326827s Dec 15 13:27:03.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 542.459604ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9448 Dec 15 13:27:04.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 15 13:27:05.263: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 15 13:27:05.263: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 15 13:27:05.263: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 15 13:27:05.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 15 13:27:05.669: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 15 13:27:05.670: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 15 13:27:05.670: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 15 13:27:05.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 15 13:27:06.461: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 15 13:27:06.462: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 15 13:27:06.462: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 15
13:27:06.462: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 15 13:27:36.561: INFO: Deleting all statefulset in ns statefulset-9448 Dec 15 13:27:36.567: INFO: Scaling statefulset ss to 0 Dec 15 13:27:36.580: INFO: Waiting for statefulset status.replicas updated to 0 Dec 15 13:27:36.584: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:27:36.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9448" for this suite. Dec 15 13:27:42.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:27:42.848: INFO: namespace statefulset-9448 deletion completed in 6.189741786s • [SLOW TEST:112.380 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:27:42.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 15 13:27:53.621: INFO: Successfully updated pod "annotationupdate6ca9c830-4f8c-4faf-8a20-2e002b8f37eb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:27:55.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3408" for this suite. 
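The annotation test updates pod metadata in place and waits for a downwardAPI volume file to catch up. The update step is equivalent to (pod name hypothetical):

kubectl annotate pod annotationupdate-demo demo-annotation=updated --overwrite
# A downwardAPI volume item with fieldPath metadata.annotations is rewritten by the
# kubelet on a later sync, so a watch on the mounted file sees the new value shortly after.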
Dec 15 13:28:17.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:28:17.936: INFO: namespace downward-api-3408 deletion completed in 22.21835022s • [SLOW TEST:35.088 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:28:17.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2e8cc128-034d-41f6-ac36-2c5236c07008 STEP: Creating a pod to test consume secrets Dec 15 13:28:18.053: INFO: Waiting up to 5m0s for pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636" in namespace "secrets-1916" to be "success or failure" Dec 15 13:28:18.139: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636": Phase="Pending", Reason="", readiness=false. Elapsed: 86.221512ms Dec 15 13:28:20.150: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097224232s Dec 15 13:28:22.201: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147506024s Dec 15 13:28:24.215: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161802985s Dec 15 13:28:26.224: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170531108s Dec 15 13:28:28.232: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178709113s STEP: Saw pod success Dec 15 13:28:28.232: INFO: Pod "pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636" satisfied condition "success or failure" Dec 15 13:28:28.236: INFO: Trying to get logs from node iruya-node pod pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636 container secret-env-test: STEP: delete the pod Dec 15 13:28:28.561: INFO: Waiting for pod pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636 to disappear Dec 15 13:28:28.570: INFO: Pod pod-secrets-91baa628-a2b4-4447-9d8b-1610a19d4636 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:28:28.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1916" for this suite. 
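Consuming a Secret "in env vars" means env.valueFrom.secretKeyRef, as opposed to a volume mount. Sketch (hypothetical names):

kubectl create secret generic secret-test-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-demo
          key: data-1
EOF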
Dec 15 13:28:34.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:28:34.801: INFO: namespace secrets-1916 deletion completed in 6.223124089s • [SLOW TEST:16.864 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:28:34.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-a3ee233c-71a2-4775-a338-590b86306cdd in namespace container-probe-9057 Dec 15 13:28:42.939: INFO: Started pod liveness-a3ee233c-71a2-4775-a338-590b86306cdd in namespace container-probe-9057 STEP: checking the pod's current state and verifying that restartCount is present Dec 15 13:28:42.944: INFO: Initial restart count of pod liveness-a3ee233c-71a2-4775-a338-590b86306cdd is 0 Dec 15 13:28:57.043: INFO: Restart count of pod container-probe-9057/liveness-a3ee233c-71a2-4775-a338-590b86306cdd is now 1 (14.098856538s elapsed) Dec 15 13:29:19.156: INFO: Restart count of pod container-probe-9057/liveness-a3ee233c-71a2-4775-a338-590b86306cdd is now 2 (36.211332394s elapsed) Dec 15 13:29:39.258: INFO: Restart count of pod container-probe-9057/liveness-a3ee233c-71a2-4775-a338-590b86306cdd is now 3 (56.313462183s elapsed) Dec 15 13:29:57.461: INFO: Restart count of pod container-probe-9057/liveness-a3ee233c-71a2-4775-a338-590b86306cdd is now 4 (1m14.517015165s elapsed) Dec 15 13:31:05.934: INFO: Restart count of pod container-probe-9057/liveness-a3ee233c-71a2-4775-a338-590b86306cdd is now 5 (2m22.990126532s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:31:05.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9057" for this suite. 
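A liveness probe that keeps failing produces exactly the monotonically increasing restartCount asserted above: the kubelet restarts the container (with backoff) but never resets the counter. A sketch that goes unhealthy ~10s after each start (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    # healthy for ~10s after each (re)start, then the probe fails and the kubelet restarts it
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# restartCount only ever grows, which is the property under test:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'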
Dec 15 13:31:12.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:31:12.154: INFO: namespace container-probe-9057 deletion completed in 6.163528977s • [SLOW TEST:157.352 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:31:12.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4257/configmap-test-2cc01a7b-27e4-4f07-8144-4e4ec0e743ac STEP: Creating a pod to test consume configMaps Dec 15 13:31:12.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4" in namespace "configmap-4257" to be "success or failure" Dec 15 13:31:12.294: INFO: Pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.194939ms Dec 15 13:31:14.302: INFO: Pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021755749s Dec 15 13:31:16.311: INFO: Pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03110049s Dec 15 13:31:18.323: INFO: Pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043105144s Dec 15 13:31:20.335: INFO: Pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054962774s STEP: Saw pod success Dec 15 13:31:20.335: INFO: Pod "pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4" satisfied condition "success or failure" Dec 15 13:31:20.340: INFO: Trying to get logs from node iruya-node pod pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4 container env-test: STEP: delete the pod Dec 15 13:31:20.407: INFO: Waiting for pod pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4 to disappear Dec 15 13:31:20.412: INFO: Pod pod-configmaps-67846217-b9ce-4e7b-9d0c-4ae612a11dc4 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:31:20.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4257" for this suite. 
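The configmap-4257 spec is the ConfigMap analogue of the earlier Secret env-var test: one key is projected into the container environment via configMapKeyRef. A sketch with hypothetical names (only the container name env-test is taken from the log):

kubectl create configmap configmap-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: data-1
EOF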
Dec 15 13:31:26.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:31:26.573: INFO: namespace configmap-4257 deletion completed in 6.155967595s • [SLOW TEST:14.418 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:31:26.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8c003071-36a9-45ce-9626-6658a754491c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8c003071-36a9-45ce-9626-6658a754491c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:32:54.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5763" for this suite. 
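The projected-5763 spec updates a ConfigMap behind a projected volume and waits for the mounted file to change, which is why it runs for minutes: the kubelet refreshes such mounts only on its periodic sync. An illustrative pod under assumed names (a subPath mount, by contrast, would never see the update):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: configmap-demo
EOF

# After `kubectl edit configmap configmap-demo`, the file under /etc/projected
# eventually follows on the kubelet's next sync.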
Dec 15 13:33:32.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:33:32.937: INFO: namespace projected-5763 deletion completed in 38.1704622s • [SLOW TEST:126.364 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:33:32.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:33:41.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9025" for this suite. 
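The kubelet-test-9025 spec schedules a busybox container with a read-only root filesystem and verifies nothing can be written to it. A sketch under assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The write fails with "Read-only file system", so the shell exits non-zero.
    command: ["sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true
EOF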
Dec 15 13:34:33.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:34:33.329: INFO: namespace kubelet-test-9025 deletion completed in 52.198077535s • [SLOW TEST:60.391 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:34:33.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1215 13:34:35.416103 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 15 13:34:35.416: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:34:35.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1592" for this suite. 
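The gc-1592 spec deletes a Deployment without orphaning, then polls until the garbage collector has removed the owned ReplicaSet and pods; the transient "expected 0 rs, got 1 rs" step records that wait. The kubectl equivalent is a cascading delete. Note that a client contemporary with this v1.15 log spelled the flag --cascade=true (the default), while current clients use --cascade=background:

kubectl create deployment gc-demo --image=nginx
kubectl get replicaset -l app=gc-demo        # owned via metadata.ownerReferences

kubectl delete deployment gc-demo --cascade=background
kubectl get replicaset -l app=gc-demo        # eventually: No resources found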
Dec 15 13:34:43.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:34:43.988: INFO: namespace gc-1592 deletion completed in 8.530810308s • [SLOW TEST:10.659 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:34:43.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4987.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4987.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 15 13:34:56.600: INFO: File wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local from pod dns-4987/dns-test-63f351c8-39ba-46e9-9130-058fab3a0348 contains '' instead of 'foo.example.com.' Dec 15 13:34:56.619: INFO: Lookups using dns-4987/dns-test-63f351c8-39ba-46e9-9130-058fab3a0348 failed for: [wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local] Dec 15 13:35:01.647: INFO: DNS probes using dns-test-63f351c8-39ba-46e9-9130-058fab3a0348 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4987.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4987.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 15 13:35:17.931: INFO: File wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local from pod dns-4987/dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 contains '' instead of 'bar.example.com.' Dec 15 13:35:17.953: INFO: File jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local from pod dns-4987/dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 contains '' instead of 'bar.example.com.' 
Dec 15 13:35:17.953: INFO: Lookups using dns-4987/dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 failed for: [wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local] Dec 15 13:35:22.976: INFO: File wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local from pod dns-4987/dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 15 13:35:22.987: INFO: File jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local from pod dns-4987/dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 15 13:35:22.987: INFO: Lookups using dns-4987/dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 failed for: [wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local] Dec 15 13:35:27.974: INFO: DNS probes using dns-test-aae3383a-ce5c-4534-b3ed-77d8de133e31 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4987.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4987.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4987.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 15 13:35:45.213: INFO: File jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local from pod dns-4987/dns-test-9afe927a-92ba-4161-949b-f45a2d3e92d9 contains '' instead of '10.104.88.212' Dec 15 13:35:45.213: INFO: Lookups using dns-4987/dns-test-9afe927a-92ba-4161-949b-f45a2d3e92d9 failed for: [jessie_udp@dns-test-service-3.dns-4987.svc.cluster.local] Dec 15 13:35:50.237: INFO: DNS probes using dns-test-9afe927a-92ba-4161-949b-f45a2d3e92d9 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:35:50.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4987" for this suite. 
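The dns-4987 probes above dig the CNAME of an ExternalName Service from two prober images and retry until the answer matches; the empty '' results are the not-yet-converged retries. The Service under test has this shape (reconstructed from the logged names, not copied from the test source):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-4987
spec:
  type: ExternalName
  externalName: foo.example.com
EOF

# The same check the probers run, from inside any pod in the cluster:
dig +short dns-test-service-3.dns-4987.svc.cluster.local CNAME    # -> foo.example.com.

Changing spec.externalName to bar.example.com, and later the type to ClusterIP, changes the expected answer, which is exactly the three probe rounds logged above.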
Dec 15 13:35:56.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:35:56.693: INFO: namespace dns-4987 deletion completed in 6.149761247s • [SLOW TEST:72.703 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:35:56.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:36:05.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-624" for this suite. Dec 15 13:36:27.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:36:28.009: INFO: namespace replication-controller-624 deletion completed in 22.150147328s • [SLOW TEST:31.315 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:36:28.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Dec 15 13:36:28.103: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9643" to be "success or failure" Dec 15 13:36:28.110: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", 
readiness=false. Elapsed: 6.849922ms Dec 15 13:36:30.125: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021343909s Dec 15 13:36:32.138: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03397428s Dec 15 13:36:34.159: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055138121s Dec 15 13:36:36.167: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063799648s Dec 15 13:36:38.191: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087280846s STEP: Saw pod success Dec 15 13:36:38.191: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Dec 15 13:36:38.195: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Dec 15 13:36:38.401: INFO: Waiting for pod pod-host-path-test to disappear Dec 15 13:36:38.413: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:36:38.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9643" for this suite. Dec 15 13:36:44.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:36:44.608: INFO: namespace hostpath-9643 deletion completed in 6.187999671s • [SLOW TEST:16.599 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:36:44.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1215 13:37:15.326531 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 15 13:37:15.326: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:37:15.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9598" for this suite. Dec 15 13:37:23.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:37:23.549: INFO: namespace gc-9598 deletion completed in 8.215756239s • [SLOW TEST:38.939 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:37:23.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 15 13:37:25.016: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:37:42.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3790" for this suite. 
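The init-container-3790 spec checks ordering guarantees on a RestartAlways pod: each init container must run to completion, in declaration order, before any app container starts. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["sh", "-c", "echo init1 done"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "echo init2 done"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF

# status.initContainerStatuses reports both init containers Terminated/Completed
# before status.containerStatuses ever shows the app container running.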
Dec 15 13:38:04.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:38:04.606: INFO: namespace init-container-3790 deletion completed in 22.251740018s • [SLOW TEST:41.055 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:38:04.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 13:38:04.715: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:38:13.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6518" for this suite. Dec 15 13:38:59.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:38:59.555: INFO: namespace pods-6518 deletion completed in 46.237219424s • [SLOW TEST:54.948 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:38:59.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1215 13:39:09.727129 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 15 13:39:09.727: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:39:09.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8518" for this suite. Dec 15 13:39:15.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:39:15.976: INFO: namespace gc-8518 deletion completed in 6.242447869s • [SLOW TEST:16.421 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:39:15.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:39:26.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6155" for this suite. 
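The kubelet-test-6155 spec runs a one-shot busybox command and asserts its stdout is captured in the container log. The same check by hand, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]
EOF

kubectl logs busybox-logs-demo    # -> Hello from busybox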
Dec 15 13:40:10.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:40:10.445: INFO: namespace kubelet-test-6155 deletion completed in 44.208228163s • [SLOW TEST:54.467 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:40:10.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 15 13:40:10.610: INFO: Creating ReplicaSet my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d Dec 15 13:40:10.630: INFO: Pod name my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d: Found 0 pods out of 1 Dec 15 13:40:15.645: INFO: Pod name my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d: Found 1 pods out of 1 Dec 15 13:40:15.645: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d" is running Dec 15 13:40:19.662: INFO: Pod "my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d-5sm6p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 13:40:10 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 13:40:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 13:40:10 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 13:40:10 +0000 UTC Reason: Message:}]) Dec 15 13:40:19.662: INFO: Trying to dial the pod Dec 15 13:40:24.700: INFO: Controller my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d: Got expected result from replica 1 [my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d-5sm6p]: "my-hostname-basic-710ba186-b6b2-41d8-9e78-eab46c21a02d-5sm6p", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:40:24.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8595" for this suite. 
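The replicaset-8595 spec builds a one-replica ReplicaSet around a public serve-hostname-style image and dials the replica until it answers with its own pod name, the "Got expected result from replica 1" line above. A reconstructed shape; the image tag and port are assumptions, not taken from the log:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF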
Dec 15 13:40:30.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:40:30.848: INFO: namespace replicaset-8595 deletion completed in 6.141272144s • [SLOW TEST:20.401 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:40:30.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6950, will wait for the garbage collector to delete the pods Dec 15 13:40:43.101: INFO: Deleting Job.batch foo took: 29.397884ms Dec 15 13:40:43.403: INFO: Terminating Job.batch foo pods took: 301.462173ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:41:26.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6950" for this suite. 
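The job-6950 spec creates a parallel Job, waits until active pods == parallelism, then deletes the Job and lets the garbage collector reap the pods, as the "Deleting Job.batch foo ... will wait for the garbage collector" lines describe. A sketch (the Job name foo comes from the log; the rest is assumed):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
EOF

kubectl delete job foo               # hands the pods to the garbage collector
kubectl get pods -l job-name=foo     # eventually: No resources found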
Dec 15 13:41:32.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:41:32.900: INFO: namespace job-6950 deletion completed in 6.154294207s • [SLOW TEST:62.050 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:41:32.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 15 13:41:32.967: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:41:46.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3272" for this suite. 
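The init-container-3272 spec is the failure-path counterpart of the earlier init-container test: with restartPolicy Never, a failing init container is not retried, the pod goes straight to Failed, and the app containers never start. Sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should-never-run"]
EOF

kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # -> Failed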
Dec 15 13:41:52.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:41:52.753: INFO: namespace init-container-3272 deletion completed in 6.170923071s • [SLOW TEST:19.852 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:41:52.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-8dxh STEP: Creating a pod to test atomic-volume-subpath Dec 15 13:41:53.015: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8dxh" in namespace "subpath-480" to be "success or failure" Dec 15 13:41:53.031: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.526212ms Dec 15 13:41:55.086: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070266317s Dec 15 13:41:57.116: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100436539s Dec 15 13:41:59.129: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113272861s Dec 15 13:42:01.139: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123545816s Dec 15 13:42:03.149: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 10.134155064s Dec 15 13:42:05.180: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 12.165017044s Dec 15 13:42:07.194: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 14.178684454s Dec 15 13:42:09.244: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 16.228935921s Dec 15 13:42:11.253: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 18.237982404s Dec 15 13:42:13.300: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 20.285035846s Dec 15 13:42:15.313: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 22.29732443s Dec 15 13:42:17.339: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.323724229s Dec 15 13:42:19.348: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 26.333147195s Dec 15 13:42:21.409: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Running", Reason="", readiness=true. Elapsed: 28.39423119s Dec 15 13:42:23.426: INFO: Pod "pod-subpath-test-configmap-8dxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.410740008s STEP: Saw pod success Dec 15 13:42:23.426: INFO: Pod "pod-subpath-test-configmap-8dxh" satisfied condition "success or failure" Dec 15 13:42:23.435: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-8dxh container test-container-subpath-configmap-8dxh: STEP: delete the pod Dec 15 13:42:23.593: INFO: Waiting for pod pod-subpath-test-configmap-8dxh to disappear Dec 15 13:42:23.605: INFO: Pod pod-subpath-test-configmap-8dxh no longer exists STEP: Deleting pod pod-subpath-test-configmap-8dxh Dec 15 13:42:23.605: INFO: Deleting pod "pod-subpath-test-configmap-8dxh" in namespace "subpath-480" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:42:23.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-480" for this suite. Dec 15 13:42:29.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:42:29.945: INFO: namespace subpath-480 deletion completed in 6.267560155s • [SLOW TEST:37.189 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:42:29.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:42:36.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8837" for this suite. 
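The namespaces-8837/nsdeletetest sequence above can be reproduced by hand: a Service is deleted along with its namespace, and recreating the namespace does not bring it back (hypothetical names):

kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo create service clusterip demo-svc --tcp=80:80

kubectl delete namespace nsdeletetest-demo    # waits for the contents to be finalized
kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo get services     # No resources found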
Dec 15 13:42:42.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:42:42.697: INFO: namespace namespaces-8837 deletion completed in 6.272642321s STEP: Destroying namespace "nsdeletetest-7957" for this suite. Dec 15 13:42:42.698: INFO: Namespace nsdeletetest-7957 was already deleted STEP: Destroying namespace "nsdeletetest-464" for this suite. Dec 15 13:42:48.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:42:48.872: INFO: namespace nsdeletetest-464 deletion completed in 6.17324639s • [SLOW TEST:18.926 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:42:48.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1215 13:43:29.622227 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 15 13:43:29.622: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:43:29.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1923" for this suite. 
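The gc-1923 spec is the inverse of the cascading deletes earlier in this run: deleteOptions request orphaning, so the ReplicationController is removed while its pods survive with their ownerReferences cleared, and the test watches for 30 seconds to confirm the collector leaves them alone. With a current kubectl (a v1.15-era client spelled this --cascade=false), and assuming a hypothetical RC whose selector is name=demo-rc:

kubectl delete replicationcontroller demo-rc --cascade=orphan
kubectl get pods -l name=demo-rc     # still running, now without an owner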
Dec 15 13:43:38.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:43:39.660: INFO: namespace gc-1923 deletion completed in 10.03294331s • [SLOW TEST:50.787 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:43:39.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7344 I1215 13:43:40.598306 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7344, replica count: 1 I1215 13:43:41.650674 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:42.651322 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:43.652257 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:44.652931 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:45.653511 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:46.654148 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:47.654976 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:48.655628 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:49.656398 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:50.657039 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:51.658098 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:52.659533 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I1215 13:43:53.660410 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:54.661729 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:55.663453 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:43:56.664558 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 15 13:43:56.832: INFO: Created: latency-svc-hkhr4 Dec 15 13:43:56.845: INFO: Got endpoints: latency-svc-hkhr4 [79.889652ms] Dec 15 13:43:56.991: INFO: Created: latency-svc-wjtz7 Dec 15 13:43:57.108: INFO: Got endpoints: latency-svc-wjtz7 [262.583787ms] Dec 15 13:43:57.147: INFO: Created: latency-svc-xsprj Dec 15 13:43:57.147: INFO: Got endpoints: latency-svc-xsprj [297.145984ms] Dec 15 13:43:57.202: INFO: Created: latency-svc-jml6l Dec 15 13:43:57.296: INFO: Got endpoints: latency-svc-jml6l [447.225945ms] Dec 15 13:43:57.332: INFO: Created: latency-svc-w67x2 Dec 15 13:43:57.353: INFO: Got endpoints: latency-svc-w67x2 [505.207308ms] Dec 15 13:43:57.504: INFO: Created: latency-svc-2v6nw Dec 15 13:43:57.573: INFO: Got endpoints: latency-svc-2v6nw [727.784793ms] Dec 15 13:43:57.582: INFO: Created: latency-svc-shd89 Dec 15 13:43:57.713: INFO: Got endpoints: latency-svc-shd89 [863.432959ms] Dec 15 13:43:57.749: INFO: Created: latency-svc-hgp26 Dec 15 13:43:57.757: INFO: Got endpoints: latency-svc-hgp26 [909.186921ms] Dec 15 13:43:57.806: INFO: Created: latency-svc-v6655 Dec 15 13:43:57.873: INFO: Got endpoints: latency-svc-v6655 [1.024984579s] Dec 15 13:43:57.963: INFO: Created: latency-svc-lxn77 Dec 15 13:43:57.967: INFO: Got endpoints: latency-svc-lxn77 [1.119946953s] Dec 15 13:43:58.067: INFO: Created: latency-svc-fxdvk Dec 15 13:43:58.093: INFO: Got endpoints: latency-svc-fxdvk [1.244440123s] Dec 15 13:43:58.138: INFO: Created: latency-svc-bncfx Dec 15 13:43:58.146: INFO: Got endpoints: latency-svc-bncfx [1.297179529s] Dec 15 13:43:58.235: INFO: Created: latency-svc-fdgt5 Dec 15 13:43:58.268: INFO: Got endpoints: latency-svc-fdgt5 [1.419932558s] Dec 15 13:43:58.379: INFO: Created: latency-svc-qdrg5 Dec 15 13:43:58.396: INFO: Got endpoints: latency-svc-qdrg5 [1.54681764s] Dec 15 13:43:58.457: INFO: Created: latency-svc-7tt26 Dec 15 13:43:58.614: INFO: Got endpoints: latency-svc-7tt26 [1.766468848s] Dec 15 13:43:58.662: INFO: Created: latency-svc-gm78c Dec 15 13:43:58.663: INFO: Got endpoints: latency-svc-gm78c [1.813505792s] Dec 15 13:43:58.786: INFO: Created: latency-svc-99sx7 Dec 15 13:43:58.803: INFO: Got endpoints: latency-svc-99sx7 [1.694274119s] Dec 15 13:43:58.856: INFO: Created: latency-svc-gqjx9 Dec 15 13:43:58.956: INFO: Got endpoints: latency-svc-gqjx9 [1.809158218s] Dec 15 13:43:58.962: INFO: Created: latency-svc-vzfjh Dec 15 13:43:58.966: INFO: Got endpoints: latency-svc-vzfjh [1.669802991s] Dec 15 13:43:59.023: INFO: Created: latency-svc-k26gk Dec 15 13:43:59.163: INFO: Got endpoints: latency-svc-k26gk [1.810475974s] Dec 15 13:43:59.163: INFO: Created: latency-svc-sttjt Dec 15 13:43:59.178: INFO: Got endpoints: latency-svc-sttjt [1.603801347s] Dec 15 13:43:59.301: INFO: Created: latency-svc-q5f7v Dec 15 13:43:59.307: INFO: Got endpoints: latency-svc-q5f7v [1.59389221s] Dec 15 
13:43:59.378: INFO: Created: latency-svc-stq2c Dec 15 13:43:59.526: INFO: Got endpoints: latency-svc-stq2c [1.769705023s] Dec 15 13:43:59.562: INFO: Created: latency-svc-z9fz6 Dec 15 13:43:59.570: INFO: Got endpoints: latency-svc-z9fz6 [1.697097112s] Dec 15 13:43:59.699: INFO: Created: latency-svc-gq4bf Dec 15 13:43:59.727: INFO: Got endpoints: latency-svc-gq4bf [1.759436221s] Dec 15 13:43:59.766: INFO: Created: latency-svc-sbblp Dec 15 13:43:59.767: INFO: Got endpoints: latency-svc-sbblp [1.673559957s] Dec 15 13:43:59.863: INFO: Created: latency-svc-hv8b5 Dec 15 13:43:59.879: INFO: Got endpoints: latency-svc-hv8b5 [1.732768335s] Dec 15 13:43:59.951: INFO: Created: latency-svc-t8p89 Dec 15 13:44:00.051: INFO: Got endpoints: latency-svc-t8p89 [1.782459383s] Dec 15 13:44:00.063: INFO: Created: latency-svc-5xnp5 Dec 15 13:44:00.073: INFO: Got endpoints: latency-svc-5xnp5 [1.676202788s] Dec 15 13:44:00.136: INFO: Created: latency-svc-c9mwq Dec 15 13:44:00.207: INFO: Got endpoints: latency-svc-c9mwq [1.591624295s] Dec 15 13:44:00.213: INFO: Created: latency-svc-59w54 Dec 15 13:44:00.240: INFO: Got endpoints: latency-svc-59w54 [1.577273405s] Dec 15 13:44:00.263: INFO: Created: latency-svc-j9ds9 Dec 15 13:44:00.278: INFO: Got endpoints: latency-svc-j9ds9 [1.475020384s] Dec 15 13:44:00.484: INFO: Created: latency-svc-8bjq7 Dec 15 13:44:00.487: INFO: Got endpoints: latency-svc-8bjq7 [1.530443888s] Dec 15 13:44:00.571: INFO: Created: latency-svc-fpcdn Dec 15 13:44:00.659: INFO: Got endpoints: latency-svc-fpcdn [1.692645258s] Dec 15 13:44:00.707: INFO: Created: latency-svc-t4hcl Dec 15 13:44:00.708: INFO: Got endpoints: latency-svc-t4hcl [1.544081188s] Dec 15 13:44:00.813: INFO: Created: latency-svc-htqc8 Dec 15 13:44:00.826: INFO: Got endpoints: latency-svc-htqc8 [1.647875521s] Dec 15 13:44:00.861: INFO: Created: latency-svc-7qjdf Dec 15 13:44:00.913: INFO: Got endpoints: latency-svc-7qjdf [1.605301105s] Dec 15 13:44:00.915: INFO: Created: latency-svc-5qbsq Dec 15 13:44:00.983: INFO: Got endpoints: latency-svc-5qbsq [1.455820175s] Dec 15 13:44:01.025: INFO: Created: latency-svc-twz2b Dec 15 13:44:01.041: INFO: Got endpoints: latency-svc-twz2b [1.470220732s] Dec 15 13:44:01.084: INFO: Created: latency-svc-k9qpl Dec 15 13:44:01.136: INFO: Got endpoints: latency-svc-k9qpl [1.408503068s] Dec 15 13:44:01.170: INFO: Created: latency-svc-hn7pz Dec 15 13:44:01.176: INFO: Got endpoints: latency-svc-hn7pz [1.408750441s] Dec 15 13:44:01.332: INFO: Created: latency-svc-qgc8l Dec 15 13:44:01.340: INFO: Got endpoints: latency-svc-qgc8l [1.460186124s] Dec 15 13:44:01.414: INFO: Created: latency-svc-5dfb2 Dec 15 13:44:01.421: INFO: Got endpoints: latency-svc-5dfb2 [1.369843144s] Dec 15 13:44:01.538: INFO: Created: latency-svc-hl9cx Dec 15 13:44:01.569: INFO: Got endpoints: latency-svc-hl9cx [1.4954628s] Dec 15 13:44:01.620: INFO: Created: latency-svc-2kjxx Dec 15 13:44:01.735: INFO: Got endpoints: latency-svc-2kjxx [1.527082748s] Dec 15 13:44:01.739: INFO: Created: latency-svc-6g57s Dec 15 13:44:01.753: INFO: Got endpoints: latency-svc-6g57s [1.512504771s] Dec 15 13:44:01.785: INFO: Created: latency-svc-mznjr Dec 15 13:44:01.802: INFO: Got endpoints: latency-svc-mznjr [1.523888853s] Dec 15 13:44:01.897: INFO: Created: latency-svc-t8zfz Dec 15 13:44:01.917: INFO: Got endpoints: latency-svc-t8zfz [1.430265008s] Dec 15 13:44:01.958: INFO: Created: latency-svc-j9jfh Dec 15 13:44:01.969: INFO: Got endpoints: latency-svc-j9jfh [1.310017297s] Dec 15 13:44:02.080: INFO: Created: latency-svc-f6tnl Dec 15 13:44:02.093: INFO: 
Got endpoints: latency-svc-f6tnl [1.384483413s] Dec 15 13:44:02.149: INFO: Created: latency-svc-r75rf Dec 15 13:44:02.172: INFO: Got endpoints: latency-svc-r75rf [1.345542078s] Dec 15 13:44:02.248: INFO: Created: latency-svc-7p4lx Dec 15 13:44:02.268: INFO: Got endpoints: latency-svc-7p4lx [1.353960166s] Dec 15 13:44:02.328: INFO: Created: latency-svc-zcfhz Dec 15 13:44:02.339: INFO: Got endpoints: latency-svc-zcfhz [1.355974346s] Dec 15 13:44:02.459: INFO: Created: latency-svc-zmj59 Dec 15 13:44:02.518: INFO: Got endpoints: latency-svc-zmj59 [1.477063022s] Dec 15 13:44:02.525: INFO: Created: latency-svc-kvz9b Dec 15 13:44:02.627: INFO: Got endpoints: latency-svc-kvz9b [1.491191781s] Dec 15 13:44:02.666: INFO: Created: latency-svc-fjpfk Dec 15 13:44:02.681: INFO: Got endpoints: latency-svc-fjpfk [1.505211795s] Dec 15 13:44:02.837: INFO: Created: latency-svc-s94h4 Dec 15 13:44:02.864: INFO: Got endpoints: latency-svc-s94h4 [1.52353379s] Dec 15 13:44:02.890: INFO: Created: latency-svc-q759b Dec 15 13:44:02.901: INFO: Got endpoints: latency-svc-q759b [1.479663372s] Dec 15 13:44:02.977: INFO: Created: latency-svc-7ldwh Dec 15 13:44:02.988: INFO: Got endpoints: latency-svc-7ldwh [1.419006513s] Dec 15 13:44:03.034: INFO: Created: latency-svc-bttvr Dec 15 13:44:03.159: INFO: Got endpoints: latency-svc-bttvr [1.424487023s] Dec 15 13:44:03.166: INFO: Created: latency-svc-zjr97 Dec 15 13:44:03.169: INFO: Got endpoints: latency-svc-zjr97 [1.415952278s] Dec 15 13:44:03.238: INFO: Created: latency-svc-wjm9n Dec 15 13:44:03.249: INFO: Got endpoints: latency-svc-wjm9n [1.446355144s] Dec 15 13:44:03.359: INFO: Created: latency-svc-2s4vg Dec 15 13:44:03.400: INFO: Got endpoints: latency-svc-2s4vg [1.482782534s] Dec 15 13:44:03.426: INFO: Created: latency-svc-dgkjp Dec 15 13:44:03.495: INFO: Got endpoints: latency-svc-dgkjp [1.525545871s] Dec 15 13:44:03.536: INFO: Created: latency-svc-x6krf Dec 15 13:44:03.536: INFO: Got endpoints: latency-svc-x6krf [1.443326705s] Dec 15 13:44:03.741: INFO: Created: latency-svc-9gkbf Dec 15 13:44:03.758: INFO: Got endpoints: latency-svc-9gkbf [1.58569645s] Dec 15 13:44:03.811: INFO: Created: latency-svc-zfvwl Dec 15 13:44:03.933: INFO: Got endpoints: latency-svc-zfvwl [1.665267297s] Dec 15 13:44:03.990: INFO: Created: latency-svc-k8mt9 Dec 15 13:44:04.025: INFO: Got endpoints: latency-svc-k8mt9 [1.685613717s] Dec 15 13:44:04.161: INFO: Created: latency-svc-n859h Dec 15 13:44:04.173: INFO: Got endpoints: latency-svc-n859h [1.654628951s] Dec 15 13:44:04.288: INFO: Created: latency-svc-qbmz5 Dec 15 13:44:04.343: INFO: Got endpoints: latency-svc-qbmz5 [1.715073111s] Dec 15 13:44:04.350: INFO: Created: latency-svc-8cmps Dec 15 13:44:04.442: INFO: Got endpoints: latency-svc-8cmps [1.760557182s] Dec 15 13:44:04.455: INFO: Created: latency-svc-cggsj Dec 15 13:44:04.489: INFO: Got endpoints: latency-svc-cggsj [1.624598226s] Dec 15 13:44:04.521: INFO: Created: latency-svc-drpx6 Dec 15 13:44:04.531: INFO: Got endpoints: latency-svc-drpx6 [1.629625543s] Dec 15 13:44:04.744: INFO: Created: latency-svc-dvlnc Dec 15 13:44:04.769: INFO: Got endpoints: latency-svc-dvlnc [1.780400398s] Dec 15 13:44:04.809: INFO: Created: latency-svc-s5bjd Dec 15 13:44:04.979: INFO: Got endpoints: latency-svc-s5bjd [1.819442219s] Dec 15 13:44:05.004: INFO: Created: latency-svc-h8kfv Dec 15 13:44:05.010: INFO: Got endpoints: latency-svc-h8kfv [1.841323909s] Dec 15 13:44:05.063: INFO: Created: latency-svc-rmmpm Dec 15 13:44:05.066: INFO: Got endpoints: latency-svc-rmmpm [1.817153071s] Dec 15 13:44:05.164: 
INFO: Created: latency-svc-v88m7 Dec 15 13:44:05.180: INFO: Got endpoints: latency-svc-v88m7 [1.779415652s] Dec 15 13:44:05.217: INFO: Created: latency-svc-2q22t Dec 15 13:44:05.223: INFO: Got endpoints: latency-svc-2q22t [1.727742862s] Dec 15 13:44:05.384: INFO: Created: latency-svc-x6d4h Dec 15 13:44:05.417: INFO: Got endpoints: latency-svc-x6d4h [1.881067479s] Dec 15 13:44:05.461: INFO: Created: latency-svc-bm87d Dec 15 13:44:05.478: INFO: Got endpoints: latency-svc-bm87d [1.719530125s] Dec 15 13:44:05.595: INFO: Created: latency-svc-vzh6f Dec 15 13:44:05.596: INFO: Got endpoints: latency-svc-vzh6f [1.662846483s] Dec 15 13:44:05.636: INFO: Created: latency-svc-zvnss Dec 15 13:44:05.652: INFO: Got endpoints: latency-svc-zvnss [1.625877147s] Dec 15 13:44:05.757: INFO: Created: latency-svc-65sg5 Dec 15 13:44:05.758: INFO: Got endpoints: latency-svc-65sg5 [1.58402332s] Dec 15 13:44:05.819: INFO: Created: latency-svc-l4fg7 Dec 15 13:44:05.840: INFO: Got endpoints: latency-svc-l4fg7 [1.496482318s] Dec 15 13:44:05.938: INFO: Created: latency-svc-qwt8t Dec 15 13:44:05.947: INFO: Got endpoints: latency-svc-qwt8t [1.504214628s] Dec 15 13:44:06.006: INFO: Created: latency-svc-tx2dj Dec 15 13:44:06.011: INFO: Got endpoints: latency-svc-tx2dj [1.521039337s] Dec 15 13:44:06.124: INFO: Created: latency-svc-p8w25 Dec 15 13:44:06.135: INFO: Got endpoints: latency-svc-p8w25 [1.604189115s] Dec 15 13:44:06.179: INFO: Created: latency-svc-hvpvj Dec 15 13:44:06.184: INFO: Got endpoints: latency-svc-hvpvj [1.4155323s] Dec 15 13:44:06.296: INFO: Created: latency-svc-878tw Dec 15 13:44:06.304: INFO: Got endpoints: latency-svc-878tw [1.324787612s] Dec 15 13:44:06.347: INFO: Created: latency-svc-krlcm Dec 15 13:44:06.363: INFO: Got endpoints: latency-svc-krlcm [1.35175345s] Dec 15 13:44:06.494: INFO: Created: latency-svc-vs2ps Dec 15 13:44:06.506: INFO: Got endpoints: latency-svc-vs2ps [1.438826674s] Dec 15 13:44:06.565: INFO: Created: latency-svc-tcqg7 Dec 15 13:44:06.579: INFO: Got endpoints: latency-svc-tcqg7 [1.399086609s] Dec 15 13:44:06.741: INFO: Created: latency-svc-tdwp5 Dec 15 13:44:06.751: INFO: Got endpoints: latency-svc-tdwp5 [1.527648606s] Dec 15 13:44:06.819: INFO: Created: latency-svc-n9vzs Dec 15 13:44:06.826: INFO: Got endpoints: latency-svc-n9vzs [1.408307524s] Dec 15 13:44:06.948: INFO: Created: latency-svc-gd99l Dec 15 13:44:06.959: INFO: Got endpoints: latency-svc-gd99l [1.481085859s] Dec 15 13:44:07.015: INFO: Created: latency-svc-hl9bm Dec 15 13:44:07.109: INFO: Got endpoints: latency-svc-hl9bm [1.512545921s] Dec 15 13:44:07.133: INFO: Created: latency-svc-c5q9r Dec 15 13:44:07.146: INFO: Got endpoints: latency-svc-c5q9r [1.494064356s] Dec 15 13:44:07.224: INFO: Created: latency-svc-2xjvw Dec 15 13:44:07.373: INFO: Got endpoints: latency-svc-2xjvw [1.614460513s] Dec 15 13:44:07.377: INFO: Created: latency-svc-xmn9x Dec 15 13:44:07.385: INFO: Got endpoints: latency-svc-xmn9x [1.544224852s] Dec 15 13:44:07.458: INFO: Created: latency-svc-vqdbt Dec 15 13:44:07.468: INFO: Got endpoints: latency-svc-vqdbt [1.521197483s] Dec 15 13:44:07.631: INFO: Created: latency-svc-rtq2w Dec 15 13:44:07.894: INFO: Got endpoints: latency-svc-rtq2w [1.883407274s] Dec 15 13:44:07.908: INFO: Created: latency-svc-fnt52 Dec 15 13:44:08.075: INFO: Got endpoints: latency-svc-fnt52 [1.93944939s] Dec 15 13:44:08.104: INFO: Created: latency-svc-jfl5b Dec 15 13:44:08.164: INFO: Got endpoints: latency-svc-jfl5b [1.979283771s] Dec 15 13:44:08.170: INFO: Created: latency-svc-jsjqf Dec 15 13:44:08.260: INFO: Got endpoints: 
latency-svc-jsjqf [1.955097855s] Dec 15 13:44:08.269: INFO: Created: latency-svc-9d8cp Dec 15 13:44:08.306: INFO: Got endpoints: latency-svc-9d8cp [1.942670306s] Dec 15 13:44:08.307: INFO: Created: latency-svc-tmp25 Dec 15 13:44:08.319: INFO: Got endpoints: latency-svc-tmp25 [1.812926288s] Dec 15 13:44:08.476: INFO: Created: latency-svc-5fxwh Dec 15 13:44:08.492: INFO: Got endpoints: latency-svc-5fxwh [1.912098658s] Dec 15 13:44:08.542: INFO: Created: latency-svc-8xstl Dec 15 13:44:08.833: INFO: Got endpoints: latency-svc-8xstl [2.082014108s] Dec 15 13:44:08.844: INFO: Created: latency-svc-7lmlj Dec 15 13:44:08.855: INFO: Got endpoints: latency-svc-7lmlj [2.027998274s] Dec 15 13:44:08.941: INFO: Created: latency-svc-lhrdp Dec 15 13:44:09.031: INFO: Got endpoints: latency-svc-lhrdp [2.0713722s] Dec 15 13:44:09.078: INFO: Created: latency-svc-rcwvc Dec 15 13:44:09.078: INFO: Got endpoints: latency-svc-rcwvc [1.968566185s] Dec 15 13:44:09.128: INFO: Created: latency-svc-vs96b Dec 15 13:44:09.221: INFO: Got endpoints: latency-svc-vs96b [2.075237087s] Dec 15 13:44:09.229: INFO: Created: latency-svc-plxj9 Dec 15 13:44:09.231: INFO: Got endpoints: latency-svc-plxj9 [1.857751544s] Dec 15 13:44:09.297: INFO: Created: latency-svc-6sms9 Dec 15 13:44:09.308: INFO: Got endpoints: latency-svc-6sms9 [1.923651589s] Dec 15 13:44:09.477: INFO: Created: latency-svc-9fhnl Dec 15 13:44:09.477: INFO: Got endpoints: latency-svc-9fhnl [2.008152081s] Dec 15 13:44:09.532: INFO: Created: latency-svc-cj5rx Dec 15 13:44:09.662: INFO: Got endpoints: latency-svc-cj5rx [1.767451212s] Dec 15 13:44:09.683: INFO: Created: latency-svc-jk9ws Dec 15 13:44:09.683: INFO: Got endpoints: latency-svc-jk9ws [1.60814033s] Dec 15 13:44:09.722: INFO: Created: latency-svc-tc7qr Dec 15 13:44:09.726: INFO: Got endpoints: latency-svc-tc7qr [1.561378834s] Dec 15 13:44:09.871: INFO: Created: latency-svc-tdcbc Dec 15 13:44:09.884: INFO: Got endpoints: latency-svc-tdcbc [1.623717273s] Dec 15 13:44:09.937: INFO: Created: latency-svc-5tmw8 Dec 15 13:44:10.041: INFO: Created: latency-svc-bv2wf Dec 15 13:44:10.047: INFO: Got endpoints: latency-svc-5tmw8 [1.74083005s] Dec 15 13:44:10.048: INFO: Got endpoints: latency-svc-bv2wf [1.729131074s] Dec 15 13:44:10.087: INFO: Created: latency-svc-wgmlb Dec 15 13:44:10.097: INFO: Got endpoints: latency-svc-wgmlb [1.605032404s] Dec 15 13:44:10.123: INFO: Created: latency-svc-b42c8 Dec 15 13:44:10.266: INFO: Got endpoints: latency-svc-b42c8 [1.433207955s] Dec 15 13:44:10.287: INFO: Created: latency-svc-f9wq5 Dec 15 13:44:10.323: INFO: Got endpoints: latency-svc-f9wq5 [1.468197966s] Dec 15 13:44:10.373: INFO: Created: latency-svc-bfjrq Dec 15 13:44:10.470: INFO: Got endpoints: latency-svc-bfjrq [1.43847609s] Dec 15 13:44:10.490: INFO: Created: latency-svc-d2w2q Dec 15 13:44:10.496: INFO: Got endpoints: latency-svc-d2w2q [1.417616473s] Dec 15 13:44:10.552: INFO: Created: latency-svc-n5q5p Dec 15 13:44:10.685: INFO: Created: latency-svc-m696q Dec 15 13:44:10.688: INFO: Got endpoints: latency-svc-n5q5p [1.466449074s] Dec 15 13:44:10.707: INFO: Got endpoints: latency-svc-m696q [1.476057424s] Dec 15 13:44:10.942: INFO: Created: latency-svc-cdgfl Dec 15 13:44:10.949: INFO: Got endpoints: latency-svc-cdgfl [1.640334352s] Dec 15 13:44:11.018: INFO: Created: latency-svc-qq4hc Dec 15 13:44:11.128: INFO: Got endpoints: latency-svc-qq4hc [1.650807909s] Dec 15 13:44:11.132: INFO: Created: latency-svc-dksjr Dec 15 13:44:11.153: INFO: Got endpoints: latency-svc-dksjr [1.490735142s] Dec 15 13:44:11.186: INFO: Created: 
latency-svc-g9t9s Dec 15 13:44:11.193: INFO: Got endpoints: latency-svc-g9t9s [1.509126576s] Dec 15 13:44:11.396: INFO: Created: latency-svc-dbb2f Dec 15 13:44:11.405: INFO: Got endpoints: latency-svc-dbb2f [1.678549702s] Dec 15 13:44:11.671: INFO: Created: latency-svc-b5h5l Dec 15 13:44:11.673: INFO: Got endpoints: latency-svc-b5h5l [1.788806631s] Dec 15 13:44:11.762: INFO: Created: latency-svc-4sd4l Dec 15 13:44:11.876: INFO: Got endpoints: latency-svc-4sd4l [1.828401978s] Dec 15 13:44:11.889: INFO: Created: latency-svc-4996r Dec 15 13:44:11.897: INFO: Got endpoints: latency-svc-4996r [1.849509691s] Dec 15 13:44:11.962: INFO: Created: latency-svc-w86zl Dec 15 13:44:11.968: INFO: Got endpoints: latency-svc-w86zl [1.871213009s] Dec 15 13:44:12.077: INFO: Created: latency-svc-cvvk5 Dec 15 13:44:12.083: INFO: Got endpoints: latency-svc-cvvk5 [1.816372799s] Dec 15 13:44:12.146: INFO: Created: latency-svc-l748n Dec 15 13:44:12.227: INFO: Got endpoints: latency-svc-l748n [1.904021978s] Dec 15 13:44:12.231: INFO: Created: latency-svc-945z7 Dec 15 13:44:12.236: INFO: Got endpoints: latency-svc-945z7 [1.76541962s] Dec 15 13:44:12.310: INFO: Created: latency-svc-xmqq2 Dec 15 13:44:12.424: INFO: Got endpoints: latency-svc-xmqq2 [1.927970293s] Dec 15 13:44:12.442: INFO: Created: latency-svc-stwj6 Dec 15 13:44:12.448: INFO: Got endpoints: latency-svc-stwj6 [1.759347418s] Dec 15 13:44:12.873: INFO: Created: latency-svc-9nfg4 Dec 15 13:44:12.956: INFO: Got endpoints: latency-svc-9nfg4 [2.24878242s] Dec 15 13:44:12.973: INFO: Created: latency-svc-lvwwt Dec 15 13:44:13.044: INFO: Got endpoints: latency-svc-lvwwt [2.094401621s] Dec 15 13:44:13.101: INFO: Created: latency-svc-4m6r2 Dec 15 13:44:13.127: INFO: Got endpoints: latency-svc-4m6r2 [1.99933148s] Dec 15 13:44:13.215: INFO: Created: latency-svc-wq274 Dec 15 13:44:13.252: INFO: Got endpoints: latency-svc-wq274 [2.099325555s] Dec 15 13:44:13.262: INFO: Created: latency-svc-gpmt4 Dec 15 13:44:13.293: INFO: Got endpoints: latency-svc-gpmt4 [2.10053128s] Dec 15 13:44:13.462: INFO: Created: latency-svc-gjr7l Dec 15 13:44:13.463: INFO: Got endpoints: latency-svc-gjr7l [2.057792112s] Dec 15 13:44:13.516: INFO: Created: latency-svc-fx7bm Dec 15 13:44:13.520: INFO: Got endpoints: latency-svc-fx7bm [1.847128328s] Dec 15 13:44:13.734: INFO: Created: latency-svc-kzgdw Dec 15 13:44:13.895: INFO: Created: latency-svc-5g42g Dec 15 13:44:13.903: INFO: Got endpoints: latency-svc-kzgdw [2.025879459s] Dec 15 13:44:13.915: INFO: Got endpoints: latency-svc-5g42g [2.017841739s] Dec 15 13:44:13.961: INFO: Created: latency-svc-gkv46 Dec 15 13:44:14.088: INFO: Got endpoints: latency-svc-gkv46 [2.119322816s] Dec 15 13:44:14.103: INFO: Created: latency-svc-cz7s9 Dec 15 13:44:14.104: INFO: Got endpoints: latency-svc-cz7s9 [2.020279077s] Dec 15 13:44:14.169: INFO: Created: latency-svc-bclv4 Dec 15 13:44:14.266: INFO: Got endpoints: latency-svc-bclv4 [2.038436437s] Dec 15 13:44:14.289: INFO: Created: latency-svc-zmqq8 Dec 15 13:44:14.303: INFO: Got endpoints: latency-svc-zmqq8 [2.066885655s] Dec 15 13:44:14.474: INFO: Created: latency-svc-tfcrg Dec 15 13:44:14.477: INFO: Got endpoints: latency-svc-tfcrg [2.052838609s] Dec 15 13:44:14.566: INFO: Created: latency-svc-2v6nc Dec 15 13:44:14.656: INFO: Got endpoints: latency-svc-2v6nc [2.207487584s] Dec 15 13:44:14.691: INFO: Created: latency-svc-wgngh Dec 15 13:44:14.694: INFO: Got endpoints: latency-svc-wgngh [1.737397779s] Dec 15 13:44:14.733: INFO: Created: latency-svc-btv7z Dec 15 13:44:14.817: INFO: Got endpoints: 
latency-svc-btv7z [1.772439255s] Dec 15 13:44:14.861: INFO: Created: latency-svc-2jk8c Dec 15 13:44:14.997: INFO: Got endpoints: latency-svc-2jk8c [1.868623264s] Dec 15 13:44:15.007: INFO: Created: latency-svc-fz9z5 Dec 15 13:44:15.009: INFO: Got endpoints: latency-svc-fz9z5 [1.755945211s] Dec 15 13:44:15.183: INFO: Created: latency-svc-54xfm Dec 15 13:44:15.195: INFO: Got endpoints: latency-svc-54xfm [1.900745216s] Dec 15 13:44:15.198: INFO: Created: latency-svc-t55vv Dec 15 13:44:15.227: INFO: Got endpoints: latency-svc-t55vv [1.763804782s] Dec 15 13:44:15.265: INFO: Created: latency-svc-7kmnj Dec 15 13:44:15.374: INFO: Got endpoints: latency-svc-7kmnj [1.854075726s] Dec 15 13:44:15.377: INFO: Created: latency-svc-724vp Dec 15 13:44:15.380: INFO: Got endpoints: latency-svc-724vp [1.476825053s] Dec 15 13:44:15.440: INFO: Created: latency-svc-xvxqz Dec 15 13:44:15.572: INFO: Got endpoints: latency-svc-xvxqz [1.657460439s] Dec 15 13:44:15.595: INFO: Created: latency-svc-nx8qm Dec 15 13:44:15.600: INFO: Got endpoints: latency-svc-nx8qm [1.511825779s] Dec 15 13:44:15.661: INFO: Created: latency-svc-hp6jv Dec 15 13:44:15.759: INFO: Got endpoints: latency-svc-hp6jv [1.655037017s] Dec 15 13:44:15.773: INFO: Created: latency-svc-jczts Dec 15 13:44:15.788: INFO: Got endpoints: latency-svc-jczts [1.520512573s] Dec 15 13:44:15.861: INFO: Created: latency-svc-5bf2h Dec 15 13:44:15.987: INFO: Got endpoints: latency-svc-5bf2h [1.68325412s] Dec 15 13:44:16.015: INFO: Created: latency-svc-n8kzz Dec 15 13:44:16.026: INFO: Got endpoints: latency-svc-n8kzz [1.548258789s] Dec 15 13:44:16.093: INFO: Created: latency-svc-svqt8 Dec 15 13:44:16.178: INFO: Got endpoints: latency-svc-svqt8 [1.521364687s] Dec 15 13:44:16.246: INFO: Created: latency-svc-tvwm5 Dec 15 13:44:16.246: INFO: Got endpoints: latency-svc-tvwm5 [1.551857795s] Dec 15 13:44:16.292: INFO: Created: latency-svc-n2bbx Dec 15 13:44:16.364: INFO: Got endpoints: latency-svc-n2bbx [1.547124131s] Dec 15 13:44:16.399: INFO: Created: latency-svc-m2mcw Dec 15 13:44:16.399: INFO: Got endpoints: latency-svc-m2mcw [1.401917171s] Dec 15 13:44:16.451: INFO: Created: latency-svc-hgrtd Dec 15 13:44:16.458: INFO: Got endpoints: latency-svc-hgrtd [1.449153846s] Dec 15 13:44:16.580: INFO: Created: latency-svc-k6t2v Dec 15 13:44:16.585: INFO: Got endpoints: latency-svc-k6t2v [1.389391342s] Dec 15 13:44:16.630: INFO: Created: latency-svc-6tcxp Dec 15 13:44:16.718: INFO: Got endpoints: latency-svc-6tcxp [1.490152912s] Dec 15 13:44:16.796: INFO: Created: latency-svc-st8nn Dec 15 13:44:16.806: INFO: Got endpoints: latency-svc-st8nn [1.425968976s] Dec 15 13:44:16.913: INFO: Created: latency-svc-zrjp4 Dec 15 13:44:16.923: INFO: Got endpoints: latency-svc-zrjp4 [1.547724398s] Dec 15 13:44:16.956: INFO: Created: latency-svc-7sdpv Dec 15 13:44:16.965: INFO: Got endpoints: latency-svc-7sdpv [1.39110076s] Dec 15 13:44:17.073: INFO: Created: latency-svc-kjwb8 Dec 15 13:44:17.120: INFO: Got endpoints: latency-svc-kjwb8 [1.519231302s] Dec 15 13:44:17.229: INFO: Created: latency-svc-vk8xp Dec 15 13:44:17.247: INFO: Got endpoints: latency-svc-vk8xp [1.487394403s] Dec 15 13:44:17.283: INFO: Created: latency-svc-54j9w Dec 15 13:44:17.289: INFO: Got endpoints: latency-svc-54j9w [1.500927002s] Dec 15 13:44:17.494: INFO: Created: latency-svc-vkj7k Dec 15 13:44:17.501: INFO: Got endpoints: latency-svc-vkj7k [1.51392954s] Dec 15 13:44:17.552: INFO: Created: latency-svc-4xjhz Dec 15 13:44:17.552: INFO: Got endpoints: latency-svc-4xjhz [1.526724556s] Dec 15 13:44:18.174: INFO: Created: 
latency-svc-ld6gr Dec 15 13:44:18.174: INFO: Got endpoints: latency-svc-ld6gr [1.996167027s] Dec 15 13:44:18.257: INFO: Created: latency-svc-kcjbj Dec 15 13:44:18.320: INFO: Got endpoints: latency-svc-kcjbj [2.074034147s] Dec 15 13:44:18.359: INFO: Created: latency-svc-rxx42 Dec 15 13:44:18.412: INFO: Got endpoints: latency-svc-rxx42 [2.047194047s] Dec 15 13:44:18.416: INFO: Created: latency-svc-jghl5 Dec 15 13:44:18.495: INFO: Got endpoints: latency-svc-jghl5 [2.095343292s] Dec 15 13:44:18.506: INFO: Created: latency-svc-2wf7z Dec 15 13:44:18.508: INFO: Got endpoints: latency-svc-2wf7z [2.049702398s] Dec 15 13:44:18.550: INFO: Created: latency-svc-4p7l5 Dec 15 13:44:18.557: INFO: Got endpoints: latency-svc-4p7l5 [1.971823068s] Dec 15 13:44:18.676: INFO: Created: latency-svc-64h6w Dec 15 13:44:18.691: INFO: Got endpoints: latency-svc-64h6w [1.972743494s] Dec 15 13:44:18.747: INFO: Created: latency-svc-kpnkx Dec 15 13:44:18.830: INFO: Got endpoints: latency-svc-kpnkx [2.023816685s] Dec 15 13:44:18.953: INFO: Created: latency-svc-pp7nl Dec 15 13:44:18.965: INFO: Got endpoints: latency-svc-pp7nl [2.042692285s] Dec 15 13:44:19.169: INFO: Created: latency-svc-2vd2d Dec 15 13:44:19.210: INFO: Got endpoints: latency-svc-2vd2d [2.24543778s] Dec 15 13:44:19.212: INFO: Created: latency-svc-7479t Dec 15 13:44:19.247: INFO: Got endpoints: latency-svc-7479t [2.126521735s] Dec 15 13:44:19.322: INFO: Created: latency-svc-d4kzp Dec 15 13:44:19.338: INFO: Got endpoints: latency-svc-d4kzp [2.090953038s] Dec 15 13:44:19.394: INFO: Created: latency-svc-thsgv Dec 15 13:44:19.405: INFO: Got endpoints: latency-svc-thsgv [2.115399928s] Dec 15 13:44:19.548: INFO: Created: latency-svc-lrfpp Dec 15 13:44:19.555: INFO: Got endpoints: latency-svc-lrfpp [2.053385535s] Dec 15 13:44:19.555: INFO: Latencies: [262.583787ms 297.145984ms 447.225945ms 505.207308ms 727.784793ms 863.432959ms 909.186921ms 1.024984579s 1.119946953s 1.244440123s 1.297179529s 1.310017297s 1.324787612s 1.345542078s 1.35175345s 1.353960166s 1.355974346s 1.369843144s 1.384483413s 1.389391342s 1.39110076s 1.399086609s 1.401917171s 1.408307524s 1.408503068s 1.408750441s 1.4155323s 1.415952278s 1.417616473s 1.419006513s 1.419932558s 1.424487023s 1.425968976s 1.430265008s 1.433207955s 1.43847609s 1.438826674s 1.443326705s 1.446355144s 1.449153846s 1.455820175s 1.460186124s 1.466449074s 1.468197966s 1.470220732s 1.475020384s 1.476057424s 1.476825053s 1.477063022s 1.479663372s 1.481085859s 1.482782534s 1.487394403s 1.490152912s 1.490735142s 1.491191781s 1.494064356s 1.4954628s 1.496482318s 1.500927002s 1.504214628s 1.505211795s 1.509126576s 1.511825779s 1.512504771s 1.512545921s 1.51392954s 1.519231302s 1.520512573s 1.521039337s 1.521197483s 1.521364687s 1.52353379s 1.523888853s 1.525545871s 1.526724556s 1.527082748s 1.527648606s 1.530443888s 1.544081188s 1.544224852s 1.54681764s 1.547124131s 1.547724398s 1.548258789s 1.551857795s 1.561378834s 1.577273405s 1.58402332s 1.58569645s 1.591624295s 1.59389221s 1.603801347s 1.604189115s 1.605032404s 1.605301105s 1.60814033s 1.614460513s 1.623717273s 1.624598226s 1.625877147s 1.629625543s 1.640334352s 1.647875521s 1.650807909s 1.654628951s 1.655037017s 1.657460439s 1.662846483s 1.665267297s 1.669802991s 1.673559957s 1.676202788s 1.678549702s 1.68325412s 1.685613717s 1.692645258s 1.694274119s 1.697097112s 1.715073111s 1.719530125s 1.727742862s 1.729131074s 1.732768335s 1.737397779s 1.74083005s 1.755945211s 1.759347418s 1.759436221s 1.760557182s 1.763804782s 1.76541962s 1.766468848s 1.767451212s 1.769705023s 
1.772439255s 1.779415652s 1.780400398s 1.782459383s 1.788806631s 1.809158218s 1.810475974s 1.812926288s 1.813505792s 1.816372799s 1.817153071s 1.819442219s 1.828401978s 1.841323909s 1.847128328s 1.849509691s 1.854075726s 1.857751544s 1.868623264s 1.871213009s 1.881067479s 1.883407274s 1.900745216s 1.904021978s 1.912098658s 1.923651589s 1.927970293s 1.93944939s 1.942670306s 1.955097855s 1.968566185s 1.971823068s 1.972743494s 1.979283771s 1.996167027s 1.99933148s 2.008152081s 2.017841739s 2.020279077s 2.023816685s 2.025879459s 2.027998274s 2.038436437s 2.042692285s 2.047194047s 2.049702398s 2.052838609s 2.053385535s 2.057792112s 2.066885655s 2.0713722s 2.074034147s 2.075237087s 2.082014108s 2.090953038s 2.094401621s 2.095343292s 2.099325555s 2.10053128s 2.115399928s 2.119322816s 2.126521735s 2.207487584s 2.24543778s 2.24878242s] Dec 15 13:44:19.556: INFO: 50 %ile: 1.625877147s Dec 15 13:44:19.556: INFO: 90 %ile: 2.049702398s Dec 15 13:44:19.556: INFO: 99 %ile: 2.24543778s Dec 15 13:44:19.556: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:44:19.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7344" for this suite. Dec 15 13:45:07.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:45:07.775: INFO: namespace svc-latency-7344 deletion completed in 48.208395166s • [SLOW TEST:88.116 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:45:07.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-db033ed4-c94d-4c5a-83e9-e50b64e5c8ad STEP: Creating a pod to test consume secrets Dec 15 13:45:07.921: INFO: Waiting up to 5m0s for pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf" in namespace "secrets-8035" to be "success or failure" Dec 15 13:45:07.927: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.980242ms Dec 15 13:45:09.938: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01634309s Dec 15 13:45:11.949: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.027455109s Dec 15 13:45:13.978: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056633349s Dec 15 13:45:15.987: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065493686s Dec 15 13:45:17.994: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072506869s STEP: Saw pod success Dec 15 13:45:17.994: INFO: Pod "pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf" satisfied condition "success or failure" Dec 15 13:45:17.998: INFO: Trying to get logs from node iruya-node pod pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf container secret-volume-test: STEP: delete the pod Dec 15 13:45:18.224: INFO: Waiting for pod pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf to disappear Dec 15 13:45:18.249: INFO: Pod pod-secrets-7c8f66db-eb9b-4cb4-9f00-e9346aa2cadf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:45:18.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8035" for this suite. Dec 15 13:45:24.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:45:24.525: INFO: namespace secrets-8035 deletion completed in 6.262348553s • [SLOW TEST:16.749 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:45:24.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 15 13:45:24.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6536' Dec 15 13:45:27.480: INFO: stderr: "" Dec 15 13:45:27.481: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
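
For reference, the wait loop that follows polls each pod with kubectl's legacy `-o template` output until every `update-demo` container reports a running state. A minimal standalone sketch of the same check, using `os/exec` and `-o jsonpath` in place of the legacy template engine (a deliberate substitution for brevity, not the suite's exact flags; it also assumes kubectl is on PATH and the kubeconfig shown in the log, /root/.kube/config):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listPods returns the names of pods matching the selector -- the same
// information the suite extracts with its go-template.
func listPods(ns, selector string) ([]string, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"get", "pods", "-l", selector, "--namespace", ns,
		"-o", `jsonpath={range .items[*]}{.metadata.name}{"\n"}{end}`).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// containerRunning reports whether the named container in the pod has a
// non-empty .state.running, mirroring the template's "true"/"" check.
func containerRunning(ns, pod, container string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"get", "pod", pod, "--namespace", ns,
		"-o", fmt.Sprintf(`jsonpath={.status.containerStatuses[?(@.name=="%s")].state.running}`, container)).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	ns, selector, container := "kubectl-6536", "name=update-demo", "update-demo"
	for attempt := 0; attempt < 60; attempt++ {
		pods, err := listPods(ns, selector)
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		running := 0
		for _, p := range pods {
			if ok, _ := containerRunning(ns, p, container); ok {
				running++
			}
		}
		fmt.Printf("%d/%d pods running\n", running, len(pods))
		if len(pods) > 0 && running == len(pods) {
			return
		}
		time.Sleep(5 * time.Second) // the log retries on roughly a 5s cadence
	}
}

As in the log below, a pod that exists but is not yet running produces empty output from the state check, so the loop simply sleeps and retries until the count converges.
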
Dec 15 13:45:27.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:27.713: INFO: stderr: "" Dec 15 13:45:27.713: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " Dec 15 13:45:27.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9wjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:45:27.837: INFO: stderr: "" Dec 15 13:45:27.838: INFO: stdout: "" Dec 15 13:45:27.838: INFO: update-demo-nautilus-w9wjd is created but not running Dec 15 13:45:32.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:33.069: INFO: stderr: "" Dec 15 13:45:33.070: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " Dec 15 13:45:33.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9wjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:45:33.730: INFO: stderr: "" Dec 15 13:45:33.731: INFO: stdout: "" Dec 15 13:45:33.731: INFO: update-demo-nautilus-w9wjd is created but not running Dec 15 13:45:38.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:38.933: INFO: stderr: "" Dec 15 13:45:38.933: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " Dec 15 13:45:38.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9wjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:45:39.035: INFO: stderr: "" Dec 15 13:45:39.035: INFO: stdout: "true" Dec 15 13:45:39.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w9wjd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:45:39.152: INFO: stderr: "" Dec 15 13:45:39.152: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 15 13:45:39.152: INFO: validating pod update-demo-nautilus-w9wjd Dec 15 13:45:39.172: INFO: got data: { "image": "nautilus.jpg" } Dec 15 13:45:39.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 15 13:45:39.172: INFO: update-demo-nautilus-w9wjd is verified up and running Dec 15 13:45:39.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xjszq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:45:39.285: INFO: stderr: "" Dec 15 13:45:39.286: INFO: stdout: "true" Dec 15 13:45:39.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xjszq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:45:39.497: INFO: stderr: "" Dec 15 13:45:39.497: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 15 13:45:39.497: INFO: validating pod update-demo-nautilus-xjszq Dec 15 13:45:39.507: INFO: got data: { "image": "nautilus.jpg" } Dec 15 13:45:39.507: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 15 13:45:39.507: INFO: update-demo-nautilus-xjszq is verified up and running STEP: scaling down the replication controller Dec 15 13:45:39.510: INFO: scanned /root for discovery docs: Dec 15 13:45:39.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6536' Dec 15 13:45:40.699: INFO: stderr: "" Dec 15 13:45:40.699: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 15 13:45:40.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:40.810: INFO: stderr: "" Dec 15 13:45:40.810: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 15 13:45:45.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:45.987: INFO: stderr: "" Dec 15 13:45:45.988: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 15 13:45:50.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:51.101: INFO: stderr: "" Dec 15 13:45:51.102: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 15 13:45:56.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:45:56.264: INFO: stderr: "" Dec 15 13:45:56.264: INFO: stdout: "update-demo-nautilus-w9wjd update-demo-nautilus-xjszq " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 15 13:46:01.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:46:01.508: INFO: stderr: "" Dec 15 13:46:01.508: INFO: stdout: "update-demo-nautilus-xjszq " Dec 15 13:46:01.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xjszq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:01.628: INFO: stderr: "" Dec 15 13:46:01.628: INFO: stdout: "true" Dec 15 13:46:01.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xjszq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:01.760: INFO: stderr: "" Dec 15 13:46:01.760: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 15 13:46:01.760: INFO: validating pod update-demo-nautilus-xjszq Dec 15 13:46:01.768: INFO: got data: { "image": "nautilus.jpg" } Dec 15 13:46:01.768: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 15 13:46:01.768: INFO: update-demo-nautilus-xjszq is verified up and running STEP: scaling up the replication controller Dec 15 13:46:01.770: INFO: scanned /root for discovery docs: Dec 15 13:46:01.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6536' Dec 15 13:46:02.939: INFO: stderr: "" Dec 15 13:46:02.939: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 15 13:46:02.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:46:03.102: INFO: stderr: "" Dec 15 13:46:03.102: INFO: stdout: "update-demo-nautilus-ql2tt update-demo-nautilus-xjszq " Dec 15 13:46:03.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql2tt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:03.199: INFO: stderr: "" Dec 15 13:46:03.199: INFO: stdout: "" Dec 15 13:46:03.199: INFO: update-demo-nautilus-ql2tt is created but not running Dec 15 13:46:08.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:46:08.348: INFO: stderr: "" Dec 15 13:46:08.349: INFO: stdout: "update-demo-nautilus-ql2tt update-demo-nautilus-xjszq " Dec 15 13:46:08.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql2tt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:08.602: INFO: stderr: "" Dec 15 13:46:08.603: INFO: stdout: "" Dec 15 13:46:08.603: INFO: update-demo-nautilus-ql2tt is created but not running Dec 15 13:46:13.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6536' Dec 15 13:46:13.726: INFO: stderr: "" Dec 15 13:46:13.726: INFO: stdout: "update-demo-nautilus-ql2tt update-demo-nautilus-xjszq " Dec 15 13:46:13.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql2tt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:14.145: INFO: stderr: "" Dec 15 13:46:14.145: INFO: stdout: "true" Dec 15 13:46:14.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ql2tt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:14.286: INFO: stderr: "" Dec 15 13:46:14.286: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 15 13:46:14.286: INFO: validating pod update-demo-nautilus-ql2tt Dec 15 13:46:14.308: INFO: got data: { "image": "nautilus.jpg" } Dec 15 13:46:14.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 15 13:46:14.308: INFO: update-demo-nautilus-ql2tt is verified up and running Dec 15 13:46:14.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xjszq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:14.465: INFO: stderr: "" Dec 15 13:46:14.465: INFO: stdout: "true" Dec 15 13:46:14.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xjszq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6536' Dec 15 13:46:14.607: INFO: stderr: "" Dec 15 13:46:14.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 15 13:46:14.607: INFO: validating pod update-demo-nautilus-xjszq Dec 15 13:46:14.622: INFO: got data: { "image": "nautilus.jpg" } Dec 15 13:46:14.622: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 15 13:46:14.622: INFO: update-demo-nautilus-xjszq is verified up and running STEP: using delete to clean up resources Dec 15 13:46:14.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6536' Dec 15 13:46:14.831: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 15 13:46:14.831: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 15 13:46:14.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6536' Dec 15 13:46:15.062: INFO: stderr: "No resources found.\n" Dec 15 13:46:15.062: INFO: stdout: "" Dec 15 13:46:15.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6536 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 15 13:46:15.375: INFO: stderr: "" Dec 15 13:46:15.376: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:46:15.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6536" for this suite. Dec 15 13:46:35.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:46:35.651: INFO: namespace kubectl-6536 deletion completed in 20.260785463s • [SLOW TEST:71.123 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:46:35.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-9b1cf43b-61b9-49bc-999d-44be7817f3dc STEP: Creating a pod to test consume configMaps Dec 15 13:46:35.810: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b" in namespace "projected-6356" to be "success or failure" Dec 15 13:46:35.842: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.508978ms Dec 15 13:46:37.881: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070693346s Dec 15 13:46:39.897: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086764641s Dec 15 13:46:41.914: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.103868435s Dec 15 13:46:43.925: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114723801s Dec 15 13:46:45.934: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124028262s STEP: Saw pod success Dec 15 13:46:45.934: INFO: Pod "pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b" satisfied condition "success or failure" Dec 15 13:46:45.938: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b container projected-configmap-volume-test: STEP: delete the pod Dec 15 13:46:46.015: INFO: Waiting for pod pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b to disappear Dec 15 13:46:46.072: INFO: Pod pod-projected-configmaps-61dfda91-f8fd-4b44-9499-f6f41b17212b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 15 13:46:46.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6356" for this suite. Dec 15 13:46:52.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 15 13:46:52.264: INFO: namespace projected-6356 deletion completed in 6.183529418s • [SLOW TEST:16.612 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 15 13:46:52.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bcnpf in namespace proxy-8145 I1215 13:46:52.620979 8 runners.go:180] Created replication controller with name: proxy-service-bcnpf, namespace: proxy-8145, replica count: 1 I1215 13:46:53.672456 8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:46:54.672930 8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:46:55.673851 8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 13:46:56.675191 8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1215 
I1215 13:46:57.676007       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 13:46:58.676645       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 13:46:59.677253       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 13:47:00.677838       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1215 13:47:01.678336       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1215 13:47:02.678890       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1215 13:47:03.679549       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1215 13:47:04.680518       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1215 13:47:05.681147       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1215 13:47:06.681804       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1215 13:47:07.682696       8 runners.go:180] proxy-service-bcnpf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 15 13:47:07.692: INFO: setup took 15.336052947s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 36.603891ms)
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 36.93401ms)
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 36.703496ms)
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 36.542144ms)
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 36.449363ms)
Dec 15 13:47:07.730: INFO: (0) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 36.752555ms)
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 36.475183ms)
Dec 15 13:47:07.729: INFO: (0) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 36.49008ms)
Dec 15 13:47:07.730: INFO: (0) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 36.880051ms)
Dec 15 13:47:07.730: INFO: (0) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 36.933892ms)
Dec 15 13:47:07.731: INFO: (0) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 37.933395ms)
Dec 15 13:47:07.738: INFO: (0) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 45.741969ms)
Dec 15 13:47:07.739: INFO: (0) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 46.606984ms)
Dec 15 13:47:07.739: INFO: (0) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: ... (200; 9.277593ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 9.497285ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 9.614636ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 9.975302ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 9.354097ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 9.662518ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 9.676729ms)
Dec 15 13:47:07.755: INFO: (1) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 9.682758ms)
Dec 15 13:47:07.759: INFO: (1) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 14.005969ms)
Dec 15 13:47:07.760: INFO: (1) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 13.89369ms)
Dec 15 13:47:07.760: INFO: (1) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 14.073317ms)
Dec 15 13:47:07.760: INFO: (1) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 14.020005ms)
Dec 15 13:47:07.760: INFO: (1) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 14.161005ms)
Dec 15 13:47:07.762: INFO: (1) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 16.496008ms)
Dec 15 13:47:07.778: INFO: (2) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 15.925308ms)
Dec 15 13:47:07.778: INFO: (2) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 15.872482ms)
Dec 15 13:47:07.778: INFO: (2) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 15.999359ms)
Dec 15 13:47:07.778: INFO: (2) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 15.964416ms)
Dec 15 13:47:07.778: INFO: (2) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 16.311157ms)
Dec 15 13:47:07.778: INFO: (2) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: ... (200; 23.871883ms)
Dec 15 13:47:07.786: INFO: (2) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 24.058948ms)
Dec 15 13:47:07.786: INFO: (2) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 23.93337ms)
Dec 15 13:47:07.786: INFO: (2) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 24.188638ms)
Dec 15 13:47:07.786: INFO: (2) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 24.283405ms)
Dec 15 13:47:07.787: INFO: (2) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 24.694367ms)
Dec 15 13:47:07.787: INFO: (2) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 25.150821ms)
Dec 15 13:47:07.801: INFO: (3) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 13.520392ms)
Dec 15 13:47:07.801: INFO: (3) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 14.114205ms)
Dec 15 13:47:07.801: INFO: (3) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 13.845632ms)
Dec 15 13:47:07.802: INFO: (3) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 14.312471ms)
Dec 15 13:47:07.823: INFO: (3) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 35.667287ms)
Dec 15 13:47:07.823: INFO: (3) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 35.882197ms)
Dec 15 13:47:07.823: INFO: (3) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 35.646626ms)
Dec 15 13:47:07.823: INFO: (3) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 35.938454ms)
Dec 15 13:47:07.824: INFO: (3) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 36.346117ms)
Dec 15 13:47:07.824: INFO: (3) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 36.242819ms)
Dec 15 13:47:07.824: INFO: (3) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 36.470762ms)
Dec 15 13:47:07.825: INFO: (3) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 37.639685ms)
Dec 15 13:47:07.825: INFO: (3) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 39.622415ms)
Dec 15 13:47:07.827: INFO: (3) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 39.916053ms)
Dec 15 13:47:07.842: INFO: (4) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 13.20073ms)
Dec 15 13:47:07.842: INFO: (4) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 13.346956ms)
Dec 15 13:47:07.842: INFO: (4) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 14.894534ms)
Dec 15 13:47:07.843: INFO: (4) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 13.649152ms)
Dec 15 13:47:07.843: INFO: (4) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 13.692057ms)
Dec 15 13:47:07.843: INFO: (4) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 14.665592ms)
Dec 15 13:47:07.843: INFO: (4) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 15.340902ms)
Dec 15 13:47:07.843: INFO: (4) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 15.023178ms)
Dec 15 13:47:07.844: INFO: (4) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 14.58673ms)
Dec 15 13:47:07.849: INFO: (4) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 20.55749ms)
Dec 15 13:47:07.850: INFO: (4) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 22.102681ms)
Dec 15 13:47:07.850: INFO: (4) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 21.49641ms)
Dec 15 13:47:07.851: INFO: (4) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 22.855187ms)
Dec 15 13:47:07.851: INFO: (4) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 22.714104ms)
Dec 15 13:47:07.851: INFO: (4) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 22.399067ms)
Dec 15 13:47:07.861: INFO: (5) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 9.432146ms)
Dec 15 13:47:07.863: INFO: (5) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 10.581914ms)
Dec 15 13:47:07.863: INFO: (5) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 10.966965ms)
Dec 15 13:47:07.864: INFO: (5) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 12.660338ms)
Dec 15 13:47:07.864: INFO: (5) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 12.23988ms)
Dec 15 13:47:07.864: INFO: (5) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 12.761884ms)
Dec 15 13:47:07.865: INFO: (5) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 12.877314ms)
Dec 15 13:47:07.865: INFO: (5) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 12.736367ms)
Dec 15 13:47:07.865: INFO: (5) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 14.820641ms)
Dec 15 13:47:07.871: INFO: (5) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 18.658632ms)
Dec 15 13:47:07.871: INFO: (5) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 18.794905ms)
Dec 15 13:47:07.873: INFO: (5) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 20.511365ms)
Dec 15 13:47:07.873: INFO: (5) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 21.145351ms)
Dec 15 13:47:07.874: INFO: (5) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 22.163274ms)
Dec 15 13:47:07.874: INFO: (5) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 22.103718ms)
Dec 15 13:47:07.890: INFO: (6) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 15.170817ms)
Dec 15 13:47:07.893: INFO: (6) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 18.589975ms)
Dec 15 13:47:07.893: INFO: (6) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 18.626261ms)
Dec 15 13:47:07.894: INFO: (6) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 19.821172ms)
Dec 15 13:47:07.895: INFO: (6) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 20.093954ms)
Dec 15 13:47:07.896: INFO: (6) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 20.713639ms)
Dec 15 13:47:07.896: INFO: (6) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 21.137113ms)
Dec 15 13:47:07.896: INFO: (6) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 20.787764ms)
Dec 15 13:47:07.896: INFO: (6) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 21.659336ms)
Dec 15 13:47:07.900: INFO: (6) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 24.871233ms)
Dec 15 13:47:07.904: INFO: (6) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 29.497811ms)
Dec 15 13:47:07.905: INFO: (6) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 29.929637ms)
Dec 15 13:47:07.911: INFO: (6) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 35.898666ms)
Dec 15 13:47:07.932: INFO: (7) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 20.669082ms)
Dec 15 13:47:07.934: INFO: (7) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 22.646351ms)
Dec 15 13:47:07.934: INFO: (7) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 23.429318ms)
Dec 15 13:47:07.934: INFO: (7) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 23.080344ms)
Dec 15 13:47:07.936: INFO: (7) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 25.346567ms)
Dec 15 13:47:07.937: INFO: (7) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 25.582086ms)
Dec 15 13:47:07.937: INFO: (7) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 25.492739ms)
Dec 15 13:47:07.937: INFO: (7) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 25.654464ms)
Dec 15 13:47:07.938: INFO: (7) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 26.967855ms)
Dec 15 13:47:07.938: INFO: (7) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 26.958817ms)
Dec 15 13:47:07.939: INFO: (7) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 27.688372ms)
Dec 15 13:47:07.939: INFO: (7) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 27.253295ms)
Dec 15 13:47:07.939: INFO: (7) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 27.503796ms)
Dec 15 13:47:07.939: INFO: (7) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 27.695269ms)
Dec 15 13:47:07.949: INFO: (8) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 9.886541ms)
Dec 15 13:47:07.951: INFO: (8) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 11.595663ms)
Dec 15 13:47:07.951: INFO: (8) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 11.768052ms)
Dec 15 13:47:07.951: INFO: (8) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 11.346009ms)
Dec 15 13:47:07.952: INFO: (8) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 12.345256ms)
Dec 15 13:47:07.952: INFO: (8) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 12.588057ms)
Dec 15 13:47:07.952: INFO: (8) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test<... (200; 15.476553ms)
Dec 15 13:47:07.956: INFO: (8) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 17.099369ms)
Dec 15 13:47:07.956: INFO: (8) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 16.938971ms)
Dec 15 13:47:07.966: INFO: (9) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 9.481284ms)
Dec 15 13:47:07.967: INFO: (9) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 10.191468ms)
Dec 15 13:47:07.967: INFO: (9) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 10.931935ms)
Dec 15 13:47:07.968: INFO: (9) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 11.666647ms)
Dec 15 13:47:07.968: INFO: (9) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 11.600618ms)
Dec 15 13:47:07.969: INFO: (9) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 12.370558ms)
Dec 15 13:47:07.969: INFO: (9) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 12.440957ms)
Dec 15 13:47:07.969: INFO: (9) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 12.655738ms)
Dec 15 13:47:07.969: INFO: (9) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 12.689269ms)
Dec 15 13:47:07.970: INFO: (9) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 12.955394ms)
Dec 15 13:47:07.973: INFO: (9) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 16.669556ms)
Dec 15 13:47:07.974: INFO: (9) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test<... (200; 18.107963ms)
Dec 15 13:47:07.984: INFO: (10) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 9.272974ms)
Dec 15 13:47:07.984: INFO: (10) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 9.098956ms)
Dec 15 13:47:07.985: INFO: (10) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 9.14759ms)
Dec 15 13:47:07.986: INFO: (10) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 11.155331ms)
Dec 15 13:47:07.986: INFO: (10) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 10.926692ms)
Dec 15 13:47:07.986: INFO: (10) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 10.889651ms)
Dec 15 13:47:07.987: INFO: (10) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 11.470366ms)
Dec 15 13:47:07.987: INFO: (10) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 11.171676ms)
Dec 15 13:47:07.987: INFO: (10) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 11.603358ms)
Dec 15 13:47:07.987: INFO: (10) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 11.364852ms)
Dec 15 13:47:07.987: INFO: (10) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 11.978978ms)
Dec 15 13:47:07.987: INFO: (10) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 11.877469ms)
Dec 15 13:47:07.988: INFO: (10) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 12.384939ms)
Dec 15 13:47:07.988: INFO: (10) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 12.554425ms)
Dec 15 13:47:07.988: INFO: (10) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 12.741892ms)
Dec 15 13:47:07.988: INFO: (10) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 17.990351ms)
Dec 15 13:47:08.007: INFO: (11) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 18.203087ms)
Dec 15 13:47:08.007: INFO: (11) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 18.48026ms)
Dec 15 13:47:08.010: INFO: (11) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: ... (200; 21.445277ms)
Dec 15 13:47:08.010: INFO: (11) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 21.510729ms)
Dec 15 13:47:08.010: INFO: (11) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 21.297406ms)
Dec 15 13:47:08.010: INFO: (11) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 21.572867ms)
Dec 15 13:47:08.019: INFO: (12) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 9.24016ms)
Dec 15 13:47:08.019: INFO: (12) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 9.03152ms)
Dec 15 13:47:08.020: INFO: (12) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 9.713692ms)
Dec 15 13:47:08.020: INFO: (12) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 9.72109ms)
Dec 15 13:47:08.020: INFO: (12) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 9.889071ms)
Dec 15 13:47:08.020: INFO: (12) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 10.041033ms)
Dec 15 13:47:08.020: INFO: (12) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 9.885034ms)
Dec 15 13:47:08.021: INFO: (12) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 11.006594ms)
Dec 15 13:47:08.022: INFO: (12) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 11.555713ms)
Dec 15 13:47:08.022: INFO: (12) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 11.774164ms)
Dec 15 13:47:08.022: INFO: (12) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 12.008514ms)
Dec 15 13:47:08.022: INFO: (12) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 12.055818ms)
Dec 15 13:47:08.022: INFO: (12) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 12.021831ms)
Dec 15 13:47:08.022: INFO: (12) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test<... (200; 14.032817ms)
Dec 15 13:47:08.037: INFO: (13) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 14.060051ms)
Dec 15 13:47:08.037: INFO: (13) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 14.259455ms)
Dec 15 13:47:08.037: INFO: (13) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 14.274635ms)
Dec 15 13:47:08.037: INFO: (13) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 14.620909ms)
Dec 15 13:47:08.042: INFO: (13) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 19.018833ms)
Dec 15 13:47:08.042: INFO: (13) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 19.324138ms)
Dec 15 13:47:08.042: INFO: (13) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 19.196906ms)
Dec 15 13:47:08.043: INFO: (13) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 19.964326ms)
Dec 15 13:47:08.043: INFO: (13) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 20.400125ms)
Dec 15 13:47:08.043: INFO: (13) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 20.508241ms)
Dec 15 13:47:08.059: INFO: (14) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 14.976548ms)
Dec 15 13:47:08.059: INFO: (14) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 14.987021ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 16.711481ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 16.765021ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 17.037281ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 17.234997ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 16.957058ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 16.869468ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 17.051077ms)
Dec 15 13:47:08.061: INFO: (14) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: ... (200; 13.909111ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 14.803446ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 14.497684ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 15.009148ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 15.264372ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test (200; 14.935622ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 15.291139ms)
Dec 15 13:47:08.080: INFO: (15) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 14.791891ms)
Dec 15 13:47:08.081: INFO: (15) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 14.949056ms)
Dec 15 13:47:08.081: INFO: (15) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 15.867427ms)
Dec 15 13:47:08.087: INFO: (16) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 6.082317ms)
Dec 15 13:47:08.088: INFO: (16) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 6.119608ms)
Dec 15 13:47:08.091: INFO: (16) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:1080/proxy/: test<... (200; 10.057448ms)
Dec 15 13:47:08.092: INFO: (16) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 10.185805ms)
Dec 15 13:47:08.092: INFO: (16) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 10.67198ms)
Dec 15 13:47:08.092: INFO: (16) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 10.891748ms)
Dec 15 13:47:08.093: INFO: (16) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 11.187964ms)
Dec 15 13:47:08.094: INFO: (16) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 12.77792ms)
Dec 15 13:47:08.095: INFO: (16) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: ... (200; 5.076522ms)
Dec 15 13:47:08.110: INFO: (17) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 10.35033ms)
Dec 15 13:47:08.110: INFO: (17) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test<... (200; 13.342909ms)
Dec 15 13:47:08.113: INFO: (17) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 13.4589ms)
Dec 15 13:47:08.113: INFO: (17) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 13.574211ms)
Dec 15 13:47:08.113: INFO: (17) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 13.533487ms)
Dec 15 13:47:08.113: INFO: (17) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 13.560615ms)
Dec 15 13:47:08.113: INFO: (17) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 13.669955ms)
Dec 15 13:47:08.127: INFO: (18) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 14.197725ms)
Dec 15 13:47:08.127: INFO: (18) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 14.381463ms)
Dec 15 13:47:08.127: INFO: (18) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test<... (200; 14.818581ms)
Dec 15 13:47:08.128: INFO: (18) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 14.941006ms)
Dec 15 13:47:08.128: INFO: (18) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 14.810985ms)
Dec 15 13:47:08.128: INFO: (18) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 14.985103ms)
Dec 15 13:47:08.128: INFO: (18) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 14.871404ms)
Dec 15 13:47:08.129: INFO: (18) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 15.438362ms)
Dec 15 13:47:08.129: INFO: (18) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 15.780214ms)
Dec 15 13:47:08.129: INFO: (18) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 15.593392ms)
Dec 15 13:47:08.129: INFO: (18) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 15.965705ms)
Dec 15 13:47:08.129: INFO: (18) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 15.929ms)
Dec 15 13:47:08.140: INFO: (19) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 11.24502ms)
Dec 15 13:47:08.140: INFO: (19) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:162/proxy/: bar (200; 11.306038ms)
Dec 15 13:47:08.141: INFO: (19) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw/proxy/: test (200; 11.175275ms)
Dec 15 13:47:08.141: INFO: (19) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:462/proxy/: tls qux (200; 11.933383ms)
Dec 15 13:47:08.141: INFO: (19) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:443/proxy/: test<... (200; 11.901739ms)
Dec 15 13:47:08.141: INFO: (19) /api/v1/namespaces/proxy-8145/pods/https:proxy-service-bcnpf-tg7lw:460/proxy/: tls baz (200; 12.226319ms)
Dec 15 13:47:08.141: INFO: (19) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:1080/proxy/: ... (200; 12.472011ms)
Dec 15 13:47:08.142: INFO: (19) /api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 12.490953ms)
Dec 15 13:47:08.142: INFO: (19) /api/v1/namespaces/proxy-8145/pods/http:proxy-service-bcnpf-tg7lw:160/proxy/: foo (200; 13.000023ms)
Dec 15 13:47:08.144: INFO: (19) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname2/proxy/: bar (200; 15.099173ms)
Dec 15 13:47:08.144: INFO: (19) /api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/: foo (200; 15.267162ms)
Dec 15 13:47:08.146: INFO: (19) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname1/proxy/: foo (200; 16.515976ms)
Dec 15 13:47:08.146: INFO: (19) /api/v1/namespaces/proxy-8145/services/http:proxy-service-bcnpf:portname2/proxy/: bar (200; 17.007579ms)
Dec 15 13:47:08.146: INFO: (19) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname1/proxy/: tls baz (200; 17.253861ms)
Dec 15 13:47:08.147: INFO: (19) /api/v1/namespaces/proxy-8145/services/https:proxy-service-bcnpf:tlsportname2/proxy/: tls qux (200; 17.856245ms)
STEP: deleting ReplicationController proxy-service-bcnpf in namespace proxy-8145, will wait for the garbage collector to delete the pods
Dec 15 13:47:08.219: INFO: Deleting ReplicationController proxy-service-bcnpf took: 18.217563ms
Dec 15 13:47:08.519: INFO: Terminating ReplicationController proxy-service-bcnpf pods took: 300.603134ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:47:16.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8145" for this suite.
Dec 15 13:47:22.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:47:22.775: INFO: namespace proxy-8145 deletion completed in 6.145022343s

• [SLOW TEST:30.510 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
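For reference, the 320 attempts above exercise the apiserver proxy subresource, which forwards HTTP(S) requests through the API server to a named service port or directly to a pod port. A minimal by-hand equivalent uses the proxy paths that appear verbatim in this run; the kubectl get --raw invocation itself is illustrative and not part of the suite:

    # Proxy to a named service port through the apiserver (names from the log above)
    kubectl --kubeconfig=/root/.kube/config get --raw \
      "/api/v1/namespaces/proxy-8145/services/proxy-service-bcnpf:portname1/proxy/"
    # Proxy directly to a pod port
    kubectl --kubeconfig=/root/.kube/config get --raw \
      "/api/v1/namespaces/proxy-8145/pods/proxy-service-bcnpf-tg7lw:160/proxy/"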
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:47:22.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 15 13:47:22.912: INFO: Waiting up to 5m0s for pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9" in namespace "emptydir-8412" to be "success or failure"
Dec 15 13:47:22.927: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.838029ms
Dec 15 13:47:24.947: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034418138s
Dec 15 13:47:26.955: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042637612s
Dec 15 13:47:28.962: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050256987s
Dec 15 13:47:30.978: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065794794s
Dec 15 13:47:32.986: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074146514s
STEP: Saw pod success
Dec 15 13:47:32.986: INFO: Pod "pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9" satisfied condition "success or failure"
Dec 15 13:47:32.991: INFO: Trying to get logs from node iruya-node pod pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9 container test-container: 
STEP: delete the pod
Dec 15 13:47:33.036: INFO: Waiting for pod pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9 to disappear
Dec 15 13:47:33.113: INFO: Pod pod-e15e5c5c-506d-4297-b6d5-9e86e822bde9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:47:33.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8412" for this suite.
Dec 15 13:47:39.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:47:39.324: INFO: namespace emptydir-8412 deletion completed in 6.204454335s

• [SLOW TEST:16.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
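For reference, "emptydir 0777 on tmpfs" means an emptyDir volume backed by memory (medium: Memory) in which a file is created with mode 0777 and then verified. The suite builds its pod in Go with its own test image, so the following hand-written YAML is only a sketch of the same idea; the pod name, image, command, and paths are assumptions:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                # the suite uses its own mount-test image
        command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory              # tmpfs-backed emptyDir
    EOF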
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:47:39.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 15 13:47:39.469: INFO: Waiting up to 5m0s for pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c" in namespace "emptydir-1162" to be "success or failure"
Dec 15 13:47:39.483: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.205165ms
Dec 15 13:47:41.495: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026471676s
Dec 15 13:47:43.504: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034923554s
Dec 15 13:47:45.512: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043226074s
Dec 15 13:47:47.530: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061031183s
Dec 15 13:47:49.548: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078750578s
STEP: Saw pod success
Dec 15 13:47:49.548: INFO: Pod "pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c" satisfied condition "success or failure"
Dec 15 13:47:49.568: INFO: Trying to get logs from node iruya-node pod pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c container test-container: 
STEP: delete the pod
Dec 15 13:47:49.684: INFO: Waiting for pod pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c to disappear
Dec 15 13:47:49.713: INFO: Pod pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:47:49.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1162" for this suite.
Dec 15 13:47:55.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:47:55.916: INFO: namespace emptydir-1162 deletion completed in 6.190195662s

• [SLOW TEST:16.591 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
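The repeated "Waiting up to 5m0s ... to be 'success or failure'" lines above are the framework polling the pod phase until it terminates. Outside the framework the same wait can be approximated with a jsonpath poll; this sketch reuses the pod and namespace names from this run, and the loop itself is an illustration rather than the suite's code:

    # Poll the pod phase until the pod terminates (rough equivalent of the wait loop)
    while true; do
      phase=$(kubectl get pod pod-5f31845f-dce7-4d66-9d69-4cdbfa87262c \
        -n emptydir-1162 -o jsonpath='{.status.phase}')
      [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ] && break
      sleep 2
    done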
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:47:55.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4359
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 15 13:47:56.048: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 15 13:48:36.564: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4359 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 13:48:36.564: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 13:48:38.047: INFO: Found all expected endpoints: [netserver-0]
Dec 15 13:48:38.057: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4359 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 13:48:38.057: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 13:48:39.492: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:48:39.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4359" for this suite.
Dec 15 13:49:03.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:49:03.680: INFO: namespace pod-network-test-4359 deletion completed in 24.174748683s

• [SLOW TEST:67.763 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
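The node-pod UDP check above works by exec'ing into a host-network helper pod and sending a datagram to each netserver pod IP with nc, exactly as the ExecWithOptions lines record. Reproduced by hand (pod names, namespace, IP, and port all taken from this run):

    # Send "hostName" over UDP to the netserver pod and keep non-empty reply lines
    kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
      -n pod-network-test-4359 -c hostexec -- \
      /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"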
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:49:03.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 15 13:49:03.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-749'
Dec 15 13:49:04.132: INFO: stderr: ""
Dec 15 13:49:04.132: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 15 13:49:05.142: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:05.142: INFO: Found 0 / 1
Dec 15 13:49:06.145: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:06.145: INFO: Found 0 / 1
Dec 15 13:49:07.147: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:07.147: INFO: Found 0 / 1
Dec 15 13:49:08.148: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:08.148: INFO: Found 0 / 1
Dec 15 13:49:09.145: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:09.145: INFO: Found 0 / 1
Dec 15 13:49:10.144: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:10.144: INFO: Found 0 / 1
Dec 15 13:49:11.141: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:11.141: INFO: Found 1 / 1
Dec 15 13:49:11.141: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 15 13:49:11.147: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:11.147: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 15 13:49:11.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rbd4v --namespace=kubectl-749 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 15 13:49:11.353: INFO: stderr: ""
Dec 15 13:49:11.353: INFO: stdout: "pod/redis-master-rbd4v patched\n"
STEP: checking annotations
Dec 15 13:49:11.368: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 13:49:11.368: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:49:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-749" for this suite.
Dec 15 13:49:33.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:49:33.637: INFO: namespace kubectl-749 deletion completed in 22.260393035s

• [SLOW TEST:29.956 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
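The patch step above adds an annotation in place with a strategic-merge patch; the full command is in the log and generalizes directly. The verification query below is an added illustration, not part of the test output:

    # Add (or overwrite) the annotation x=y on the pod from this run
    kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rbd4v \
      --namespace=kubectl-749 -p '{"metadata":{"annotations":{"x":"y"}}}'
    # Confirm the annotation landed
    kubectl get pod redis-master-rbd4v -n kubectl-749 \
      -o jsonpath='{.metadata.annotations.x}'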
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:49:33.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 13:49:33.986: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 40.631578ms)
Dec 15 13:49:34.011: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.083023ms)
Dec 15 13:49:34.024: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.340797ms)
Dec 15 13:49:34.035: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.681595ms)
Dec 15 13:49:34.073: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.980372ms)
Dec 15 13:49:34.095: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.599397ms)
Dec 15 13:49:34.107: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.355692ms)
Dec 15 13:49:34.115: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.823446ms)
Dec 15 13:49:34.123: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.244439ms)
Dec 15 13:49:34.127: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.966601ms)
Dec 15 13:49:34.131: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.362297ms)
Dec 15 13:49:34.137: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.861745ms)
Dec 15 13:49:34.143: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.149284ms)
Dec 15 13:49:34.152: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.504637ms)
Dec 15 13:49:34.159: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.00722ms)
Dec 15 13:49:34.165: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.52801ms)
Dec 15 13:49:34.170: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.824698ms)
Dec 15 13:49:34.177: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.938041ms)
Dec 15 13:49:34.182: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.307831ms)
Dec 15 13:49:34.188: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.663421ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:49:34.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4801" for this suite.
Dec 15 13:49:40.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:49:40.339: INFO: namespace proxy-4801 deletion completed in 6.146647367s

• [SLOW TEST:6.702 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
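For reference, this test fetches the kubelet's /logs/ listing through the apiserver's node proxy subresource, with the kubelet port (10250) spelled out in the path; the truncated "alternatives.log" fragments above are the beginning of that directory listing. The path is verbatim from the log; the kubectl get --raw invocation is an added illustration:

    # List node log files via the apiserver -> kubelet proxy (node name and port from the log)
    kubectl --kubeconfig=/root/.kube/config get --raw \
      "/api/v1/nodes/iruya-node:10250/proxy/logs/"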
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:49:40.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:49:40.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376" in namespace "projected-9686" to be "success or failure"
Dec 15 13:49:40.468: INFO: Pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376": Phase="Pending", Reason="", readiness=false. Elapsed: 26.382612ms
Dec 15 13:49:42.485: INFO: Pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043410074s
Dec 15 13:49:44.497: INFO: Pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056191244s
Dec 15 13:49:46.519: INFO: Pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078213634s
Dec 15 13:49:48.545: INFO: Pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103307604s
STEP: Saw pod success
Dec 15 13:49:48.545: INFO: Pod "downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376" satisfied condition "success or failure"
Dec 15 13:49:48.576: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376 container client-container: 
STEP: delete the pod
Dec 15 13:49:48.656: INFO: Waiting for pod downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376 to disappear
Dec 15 13:49:48.662: INFO: Pod downwardapi-volume-ef0e3c02-4149-4ea4-99e4-f216b8b67376 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:49:48.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9686" for this suite.
Dec 15 13:49:54.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:49:54.838: INFO: namespace projected-9686 deletion completed in 6.16525583s

• [SLOW TEST:14.498 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
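The pod in this test mounts a projected downwardAPI volume exposing limits.memory; because the container sets no memory limit, the kubelet reports the node's allocatable memory instead, which is what the test asserts. The suite builds its pod in Go, so the following YAML is only a hand-written sketch of such a volume; the pod name, image, and file path are assumptions:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-memlimit-demo   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        # no resources.limits.memory set, so node allocatable is substituted
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: memory_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF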
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:49:54.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:49:55.011: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec" in namespace "downward-api-2683" to be "success or failure"
Dec 15 13:49:55.115: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec": Phase="Pending", Reason="", readiness=false. Elapsed: 103.485375ms
Dec 15 13:49:57.171: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159395285s
Dec 15 13:49:59.185: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173077041s
Dec 15 13:50:01.476: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464168364s
Dec 15 13:50:03.483: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.471393677s
Dec 15 13:50:05.491: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.479775812s
STEP: Saw pod success
Dec 15 13:50:05.491: INFO: Pod "downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec" satisfied condition "success or failure"
Dec 15 13:50:05.496: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec container client-container: 
STEP: delete the pod
Dec 15 13:50:05.877: INFO: Waiting for pod downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec to disappear
Dec 15 13:50:05.965: INFO: Pod downwardapi-volume-e53f3ce4-f801-43c7-ac85-dc658fe860ec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:50:05.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2683" for this suite.
Dec 15 13:50:12.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:50:12.161: INFO: namespace downward-api-2683 deletion completed in 6.178107111s

• [SLOW TEST:17.319 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
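"podname only" mounts a plain (non-projected) downwardAPI volume with a single fieldRef item pointing at metadata.name, and the test reads it back from the container. A minimal hand-written equivalent, with illustrative names:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-podname-demo  # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF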
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:50:12.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 15 13:50:21.418: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:50:21.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6949" for this suite.
Dec 15 13:50:45.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:50:45.815: INFO: namespace replicaset-6949 deletion completed in 24.239139874s

• [SLOW TEST:33.653 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
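Adoption and release in this test are driven purely by labels: a bare pod whose labels match the ReplicaSet's selector gets an ownerReference added (adopted), and changing the label makes the controller drop the ownerReference (released) and start a replacement. The relabel step can be reproduced by hand; the pod name and the 'name' label key come from the log, while the replacement label value is an assumption:

    # Release a previously adopted pod by breaking the selector match
    kubectl label pod pod-adoption-release name=released --overwrite   # "released" is a made-up value
    # The released pod's ownerReferences should now be empty
    kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'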
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:50:45.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 15 13:50:45.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 15 13:50:46.092: INFO: stderr: ""
Dec 15 13:50:46.092: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:50:46.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1791" for this suite.
Dec 15 13:50:52.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:50:52.337: INFO: namespace kubectl-1791 deletion completed in 6.237313461s

• [SLOW TEST:6.522 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
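
The assertion here is simply that the core API group is reported as the bare version string "v1" in discovery output. Reproduced by hand with the same kubectl subcommand the framework shells out to:

kubectl api-versions | grep -x v1 && echo "v1 is available"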
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:50:52.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 15 13:50:52.584: INFO: Number of nodes with available pods: 0
Dec 15 13:50:52.585: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:53.606: INFO: Number of nodes with available pods: 0
Dec 15 13:50:53.606: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:54.809: INFO: Number of nodes with available pods: 0
Dec 15 13:50:54.809: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:55.605: INFO: Number of nodes with available pods: 0
Dec 15 13:50:55.605: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:56.604: INFO: Number of nodes with available pods: 0
Dec 15 13:50:56.604: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:58.213: INFO: Number of nodes with available pods: 0
Dec 15 13:50:58.214: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:58.658: INFO: Number of nodes with available pods: 0
Dec 15 13:50:58.658: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:50:59.618: INFO: Number of nodes with available pods: 0
Dec 15 13:50:59.618: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:51:00.611: INFO: Number of nodes with available pods: 0
Dec 15 13:51:00.611: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:51:01.602: INFO: Number of nodes with available pods: 0
Dec 15 13:51:01.602: INFO: Node iruya-node is running more than one daemon pod
Dec 15 13:51:02.643: INFO: Number of nodes with available pods: 2
Dec 15 13:51:02.643: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 15 13:51:02.693: INFO: Number of nodes with available pods: 1
Dec 15 13:51:02.693: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:03.723: INFO: Number of nodes with available pods: 1
Dec 15 13:51:03.724: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:04.771: INFO: Number of nodes with available pods: 1
Dec 15 13:51:04.771: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:05.723: INFO: Number of nodes with available pods: 1
Dec 15 13:51:05.723: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:06.731: INFO: Number of nodes with available pods: 1
Dec 15 13:51:06.731: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:07.713: INFO: Number of nodes with available pods: 1
Dec 15 13:51:07.713: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:08.726: INFO: Number of nodes with available pods: 1
Dec 15 13:51:08.727: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:09.712: INFO: Number of nodes with available pods: 1
Dec 15 13:51:09.712: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:10.714: INFO: Number of nodes with available pods: 1
Dec 15 13:51:10.715: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:11.710: INFO: Number of nodes with available pods: 1
Dec 15 13:51:11.710: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:12.708: INFO: Number of nodes with available pods: 1
Dec 15 13:51:12.708: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:13.719: INFO: Number of nodes with available pods: 1
Dec 15 13:51:13.719: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:14.711: INFO: Number of nodes with available pods: 1
Dec 15 13:51:14.711: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:15.711: INFO: Number of nodes with available pods: 1
Dec 15 13:51:15.711: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:16.716: INFO: Number of nodes with available pods: 1
Dec 15 13:51:16.716: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:17.718: INFO: Number of nodes with available pods: 1
Dec 15 13:51:17.718: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:18.726: INFO: Number of nodes with available pods: 1
Dec 15 13:51:18.727: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:19.716: INFO: Number of nodes with available pods: 1
Dec 15 13:51:19.716: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:20.711: INFO: Number of nodes with available pods: 1
Dec 15 13:51:20.711: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:21.752: INFO: Number of nodes with available pods: 1
Dec 15 13:51:21.752: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:22.713: INFO: Number of nodes with available pods: 1
Dec 15 13:51:22.713: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:24.276: INFO: Number of nodes with available pods: 1
Dec 15 13:51:24.276: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:24.709: INFO: Number of nodes with available pods: 1
Dec 15 13:51:24.709: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:25.780: INFO: Number of nodes with available pods: 1
Dec 15 13:51:25.780: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 15 13:51:26.717: INFO: Number of nodes with available pods: 2
Dec 15 13:51:26.717: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4196, then waiting for the garbage collector to delete the pods
Dec 15 13:51:26.786: INFO: Deleting DaemonSet.extensions daemon-set took: 12.592975ms
Dec 15 13:51:27.086: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.605838ms
Dec 15 13:51:34.107: INFO: Number of nodes with available pods: 0
Dec 15 13:51:34.107: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 13:51:34.112: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4196/daemonsets","resourceVersion":"16766630"},"items":null}

Dec 15 13:51:34.116: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4196/pods","resourceVersion":"16766630"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:51:34.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4196" for this suite.
Dec 15 13:51:40.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:51:40.270: INFO: namespace daemonsets-4196 deletion completed in 6.136550695s

• [SLOW TEST:47.931 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
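
For reference, a DaemonSet along the lines of what 'Creating simple DaemonSet "daemon-set"' produces (the image and labels are assumptions on my part); the revive check then amounts to deleting one daemon pod and watching the controller recreate it on the same node:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
# One pod should come up per schedulable node (2 nodes in this log):
kubectl get pods -l app=daemon-set -o wide
# Stop one daemon pod; the controller revives it:
POD=$(kubectl get pods -l app=daemon-set -o jsonpath='{.items[0].metadata.name}')
kubectl delete pod "$POD"
kubectl get pods -l app=daemon-set -w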
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:51:40.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 15 13:51:40.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8492'
Dec 15 13:51:40.544: INFO: stderr: ""
Dec 15 13:51:40.544: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 15 13:51:50.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8492 -o json'
Dec 15 13:51:50.750: INFO: stderr: ""
Dec 15 13:51:50.751: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-15T13:51:40Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-8492\",\n        \"resourceVersion\": \"16766683\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8492/pods/e2e-test-nginx-pod\",\n        \"uid\": \"cf09f7e7-2d6c-4e2d-9a32-4c3e46e9fb81\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jknvm\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jknvm\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jknvm\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-15T13:51:40Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-15T13:51:47Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-15T13:51:47Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-15T13:51:40Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://379408a94bf3670194c011d7af1a5018306e4f701a58912c68b1e4d667382f02\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-15T13:51:47Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-15T13:51:40Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 15 13:51:50.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8492'
Dec 15 13:51:51.382: INFO: stderr: ""
Dec 15 13:51:51.382: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 15 13:51:51.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8492'
Dec 15 13:51:57.792: INFO: stderr: ""
Dec 15 13:51:57.792: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:51:57.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8492" for this suite.
Dec 15 13:52:03.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:52:04.084: INFO: namespace kubectl-8492 deletion completed in 6.247088864s

• [SLOW TEST:23.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
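
The flow above is: kubectl run creates the pod, kubectl get -o json dumps the live object, and kubectl replace -f - pushes back a modified manifest. A hand-rolled sketch of the same round trip (the sed-based image swap is an illustrative shortcut, not the framework's exact mechanism; pod name and images mirror the log):

kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
# The check only inspects the declared image on the replaced pod:
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'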
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:52:04.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:52:04.182: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824" in namespace "downward-api-8974" to be "success or failure"
Dec 15 13:52:04.214: INFO: Pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824": Phase="Pending", Reason="", readiness=false. Elapsed: 32.102894ms
Dec 15 13:52:06.225: INFO: Pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042409898s
Dec 15 13:52:08.235: INFO: Pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052490571s
Dec 15 13:52:10.249: INFO: Pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067090615s
Dec 15 13:52:12.261: INFO: Pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078659535s
STEP: Saw pod success
Dec 15 13:52:12.261: INFO: Pod "downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824" satisfied condition "success or failure"
Dec 15 13:52:12.265: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824 container client-container: 
STEP: delete the pod
Dec 15 13:52:12.464: INFO: Waiting for pod downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824 to disappear
Dec 15 13:52:12.479: INFO: Pod downwardapi-volume-03d33509-77b9-43ef-926e-d086e0f97824 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:52:12.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8974" for this suite.
Dec 15 13:52:18.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:52:18.658: INFO: namespace downward-api-8974 deletion completed in 6.169593608s

• [SLOW TEST:14.571 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
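
The DefaultMode assertion boils down to: files projected by a downwardAPI volume inherit the volume-level defaultMode unless a per-item mode overrides it. A minimal sketch, assuming 0400 as the mode under test:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
# The projected file should be listed with mode -r-------- (0400):
kubectl logs downwardapi-defaultmode-demo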
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:52:18.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 15 13:52:26.978: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:52:27.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3486" for this suite.
Dec 15 13:52:33.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:52:33.545: INFO: namespace container-runtime-3486 deletion completed in 6.433773782s

• [SLOW TEST:14.886 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
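
What this spec verifies: with terminationMessagePolicy set to FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log ("DONE" in the run above) promoted into the termination message. A minimal sketch:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the container has failed, the message is taken from the log output:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'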
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:52:33.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Dec 15 13:52:33.676: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 15 13:52:38.686: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:52:38.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2266" for this suite.
Dec 15 13:52:44.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:52:45.229: INFO: namespace replication-controller-2266 deletion completed in 6.336702928s

• [SLOW TEST:11.684 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
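
Same release mechanics as the ReplicaSet spec earlier in this run, but through the older ReplicationController API with its equality-based selector (no matchLabels). An illustrative sketch:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: nginx:1.14-alpine
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
# Overwriting the matched label releases the pod from the controller;
# its ownerReference is removed and the RC creates a replacement.
kubectl label pod "$POD" name=not-matching --overwrite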
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:52:45.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:52:45.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835" in namespace "downward-api-9758" to be "success or failure"
Dec 15 13:52:45.511: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Pending", Reason="", readiness=false. Elapsed: 181.130508ms
Dec 15 13:52:47.518: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187998017s
Dec 15 13:52:49.526: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196723699s
Dec 15 13:52:51.544: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214503781s
Dec 15 13:52:53.557: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226949769s
Dec 15 13:52:55.568: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238625652s
Dec 15 13:52:57.577: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.247169256s
STEP: Saw pod success
Dec 15 13:52:57.577: INFO: Pod "downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835" satisfied condition "success or failure"
Dec 15 13:52:57.580: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835 container client-container: 
STEP: delete the pod
Dec 15 13:52:57.631: INFO: Waiting for pod downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835 to disappear
Dec 15 13:52:57.643: INFO: Pod downwardapi-volume-0be4a6ab-2253-4472-b2f8-20c54374e835 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:52:57.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9758" for this suite.
Dec 15 13:53:03.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:53:03.928: INFO: namespace downward-api-9758 deletion completed in 6.276741567s

• [SLOW TEST:18.697 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
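
Here the volume plugin resolves a resourceFieldRef against the container's own resources; with a divisor of 1m, a 250m CPU request is projected as the string "250". An illustrative pod (the request value is an assumption, not the test's exact number; the Projected downwardAPI variant later in this log differs only in wrapping the same source in a projected volume):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
kubectl logs downwardapi-cpu-request-demo   # expect: 250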
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:53:03.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 15 13:53:04.142: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:53:26.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8362" for this suite.
Dec 15 13:53:32.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:53:32.862: INFO: namespace pods-8362 deletion completed in 6.197315719s

• [SLOW TEST:28.933 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
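
This spec drives the pod lifecycle through a watch: create, observe the ADDED event, delete gracefully, then observe the terminating MODIFIED events and the final deletion. A rough manual equivalent (pod name is illustrative):

kubectl get pods -w &   # stream lifecycle changes in the background
WATCH_PID=$!
kubectl run pod-lifecycle-demo --image=nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/pod-lifecycle-demo
kubectl delete pod pod-lifecycle-demo --grace-period=30
kill "$WATCH_PID"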
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:53:32.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3140f010-b91b-4de3-b2b2-345b5ea34a2b
STEP: Creating a pod to test consume configMaps
Dec 15 13:53:33.021: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6" in namespace "projected-8414" to be "success or failure"
Dec 15 13:53:33.189: INFO: Pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 167.081372ms
Dec 15 13:53:35.197: INFO: Pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174844556s
Dec 15 13:53:37.209: INFO: Pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187081274s
Dec 15 13:53:39.226: INFO: Pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203925889s
Dec 15 13:53:41.261: INFO: Pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.23895351s
STEP: Saw pod success
Dec 15 13:53:41.261: INFO: Pod "pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6" satisfied condition "success or failure"
Dec 15 13:53:41.264: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 13:53:41.318: INFO: Waiting for pod pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6 to disappear
Dec 15 13:53:41.323: INFO: Pod pod-projected-configmaps-5550d915-48cd-4933-95c1-29bb522cc9d6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:53:41.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8414" for this suite.
Dec 15 13:53:47.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:53:47.487: INFO: namespace projected-8414 deletion completed in 6.157574142s

• [SLOW TEST:14.621 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
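
Projected volumes wrap one or more sources (configMap, secret, downwardAPI, serviceAccountToken) behind a single mount, and defaultMode applies to every projected file. A sketch with an illustrative configMap; the plain "consumable from pods in volume" projected-configMap spec later in this log is the same shape without the mode override:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      defaultMode: 0400
      sources:
      - configMap:
          name: projected-cm-demo
EOF
kubectl logs projected-configmap-mode-demo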
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:53:47.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 13:53:47.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8" in namespace "projected-1954" to be "success or failure"
Dec 15 13:53:47.639: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 41.292851ms
Dec 15 13:53:49.646: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047937216s
Dec 15 13:53:51.660: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062509633s
Dec 15 13:53:53.676: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077860058s
Dec 15 13:53:55.702: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10386237s
Dec 15 13:53:57.714: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115889878s
STEP: Saw pod success
Dec 15 13:53:57.714: INFO: Pod "downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8" satisfied condition "success or failure"
Dec 15 13:53:57.719: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8 container client-container: 
STEP: delete the pod
Dec 15 13:53:57.826: INFO: Waiting for pod downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8 to disappear
Dec 15 13:53:57.841: INFO: Pod downwardapi-volume-ab0bc94d-e868-40f1-b583-7cafbcfbb0f8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:53:57.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1954" for this suite.
Dec 15 13:54:03.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:54:04.036: INFO: namespace projected-1954 deletion completed in 6.185258982s

• [SLOW TEST:16.549 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:54:04.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 15 13:54:04.164: INFO: Waiting up to 5m0s for pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858" in namespace "emptydir-6116" to be "success or failure"
Dec 15 13:54:04.178: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858": Phase="Pending", Reason="", readiness=false. Elapsed: 13.303909ms
Dec 15 13:54:06.192: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027881458s
Dec 15 13:54:08.206: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041158456s
Dec 15 13:54:10.215: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050498686s
Dec 15 13:54:12.224: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059142266s
Dec 15 13:54:14.233: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068832167s
STEP: Saw pod success
Dec 15 13:54:14.233: INFO: Pod "pod-56ab7a05-f83c-429f-b170-5260bd83f858" satisfied condition "success or failure"
Dec 15 13:54:14.238: INFO: Trying to get logs from node iruya-node pod pod-56ab7a05-f83c-429f-b170-5260bd83f858 container test-container: 
STEP: delete the pod
Dec 15 13:54:14.281: INFO: Waiting for pod pod-56ab7a05-f83c-429f-b170-5260bd83f858 to disappear
Dec 15 13:54:14.302: INFO: Pod pod-56ab7a05-f83c-429f-b170-5260bd83f858 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:54:14.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6116" for this suite.
Dec 15 13:54:20.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:54:20.591: INFO: namespace emptydir-6116 deletion completed in 6.279055399s

• [SLOW TEST:16.554 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
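
The emptyDir matrix specs all follow one pattern: mount a scratch volume, create a file with the given mode as the given user, and verify ownership, mode, and medium. An illustrative non-root variant on the default (disk-backed) medium; the (root,0666,default) run later in this log differs only in user and mode:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # non-root, mirroring the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}      # default medium (node disk); medium: Memory would be tmpfs
EOF
kubectl logs emptydir-demo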
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:54:20.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-556
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 15 13:54:20.803: INFO: Found 0 stateful pods, waiting for 3
Dec 15 13:54:30.813: INFO: Found 2 stateful pods, waiting for 3
Dec 15 13:54:40.883: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 13:54:40.883: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 13:54:40.883: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 15 13:54:50.825: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 13:54:50.825: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 13:54:50.825: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 13:54:50.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-556 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 15 13:54:51.465: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 15 13:54:51.465: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 15 13:54:51.465: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 15 13:55:01.534: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 15 13:55:11.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-556 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 13:55:11.994: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 15 13:55:11.994: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 15 13:55:11.994: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 15 13:55:22.050: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:55:22.050: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 13:55:22.050: INFO: Waiting for Pod statefulset-556/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 13:55:32.064: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:55:32.064: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 13:55:32.064: INFO: Waiting for Pod statefulset-556/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 13:55:42.408: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:55:42.408: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 13:55:52.067: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:55:52.067: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 13:56:02.087: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 15 13:56:12.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-556 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 15 13:56:15.191: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 15 13:56:15.191: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 15 13:56:15.191: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 15 13:56:25.252: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 15 13:56:35.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-556 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 13:56:35.715: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 15 13:56:35.715: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 15 13:56:35.715: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 15 13:56:45.780: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:56:45.781: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 15 13:56:45.781: INFO: Waiting for Pod statefulset-556/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 15 13:56:55.798: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:56:55.798: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 15 13:56:55.798: INFO: Waiting for Pod statefulset-556/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 15 13:57:05.798: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
Dec 15 13:57:05.798: INFO: Waiting for Pod statefulset-556/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 15 13:57:15.797: INFO: Waiting for StatefulSet statefulset-556/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 15 13:57:25.806: INFO: Deleting all statefulset in ns statefulset-556
Dec 15 13:57:25.811: INFO: Scaling statefulset ss2 to 0
Dec 15 13:58:05.851: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 13:58:05.858: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:58:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-556" for this suite.
Dec 15 13:58:13.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:58:14.035: INFO: namespace statefulset-556 deletion completed in 8.121224754s

• [SLOW TEST:233.443 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
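
The rolling update and rollback exercised above can be reproduced with the rollout subcommands; the container name "nginx" inside ss2 is an assumption on my part, and the images match the template change logged above:

kubectl -n statefulset-556 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-556 rollout status statefulset/ss2
kubectl -n statefulset-556 rollout history statefulset/ss2   # two controller revisions now exist
kubectl -n statefulset-556 rollout undo statefulset/ss2      # back to nginx:1.14-alpine
kubectl -n statefulset-556 rollout status statefulset/ss2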
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:58:14.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-d18a7124-6a2b-4166-b5a9-857ea8941444 in namespace container-probe-1830
Dec 15 13:58:22.204: INFO: Started pod busybox-d18a7124-6a2b-4166-b5a9-857ea8941444 in namespace container-probe-1830
STEP: checking the pod's current state and verifying that restartCount is present
Dec 15 13:58:22.209: INFO: Initial restart count of pod busybox-d18a7124-6a2b-4166-b5a9-857ea8941444 is 0
Dec 15 13:59:12.872: INFO: Restart count of pod container-probe-1830/busybox-d18a7124-6a2b-4166-b5a9-857ea8941444 is now 1 (50.662833717s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:59:12.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1830" for this suite.
Dec 15 13:59:19.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:59:19.197: INFO: namespace container-probe-1830 deletion completed in 6.251005394s

• [SLOW TEST:65.162 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
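
The probe setup behind this spec: the container creates /tmp/health, removes it after a while, and the exec probe "cat /tmp/health" starts failing, so the kubelet restarts the container and restartCount goes from 0 to 1. A minimal sketch with illustrative timings:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Watch restartCount climb once /tmp/health disappears:
kubectl get pod liveness-exec-demo -w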
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:59:19.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-92748059-42f2-4e3d-a65c-95c65fbb523c
STEP: Creating a pod to test consume configMaps
Dec 15 13:59:19.319: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb" in namespace "projected-8802" to be "success or failure"
Dec 15 13:59:19.326: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478486ms
Dec 15 13:59:21.343: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023854368s
Dec 15 13:59:23.356: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036512779s
Dec 15 13:59:25.367: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048231722s
Dec 15 13:59:27.385: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065894843s
Dec 15 13:59:29.393: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074078021s
STEP: Saw pod success
Dec 15 13:59:29.393: INFO: Pod "pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb" satisfied condition "success or failure"
Dec 15 13:59:29.398: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 13:59:29.500: INFO: Waiting for pod pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb to disappear
Dec 15 13:59:29.507: INFO: Pod pod-projected-configmaps-cbef8f53-b3ee-4264-89f4-1588b45727cb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:59:29.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8802" for this suite.
Dec 15 13:59:35.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:59:35.712: INFO: namespace projected-8802 deletion completed in 6.19834769s

• [SLOW TEST:16.514 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:59:35.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 15 13:59:35.845: INFO: Waiting up to 5m0s for pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8" in namespace "emptydir-89" to be "success or failure"
Dec 15 13:59:35.856: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.688549ms
Dec 15 13:59:37.876: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030818807s
Dec 15 13:59:39.886: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040133757s
Dec 15 13:59:41.894: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048596857s
Dec 15 13:59:43.908: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062691798s
Dec 15 13:59:45.918: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072201658s
STEP: Saw pod success
Dec 15 13:59:45.918: INFO: Pod "pod-bddcb711-7882-4001-b057-ec745cd47fd8" satisfied condition "success or failure"
Dec 15 13:59:45.923: INFO: Trying to get logs from node iruya-node pod pod-bddcb711-7882-4001-b057-ec745cd47fd8 container test-container: 
STEP: delete the pod
Dec 15 13:59:46.120: INFO: Waiting for pod pod-bddcb711-7882-4001-b057-ec745cd47fd8 to disappear
Dec 15 13:59:46.172: INFO: Pod pod-bddcb711-7882-4001-b057-ec745cd47fd8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 13:59:46.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-89" for this suite.
Dec 15 13:59:52.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 13:59:52.288: INFO: namespace emptydir-89 deletion completed in 6.103336542s

• [SLOW TEST:16.576 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
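"should support (root,0666,default)" reads as: run as root, expect file mode 0666, on the default (node-disk) emptyDir medium; the tmpfs variants of the same spec swap in the memory medium. A hedged sketch of such a pod, with busybox standing in for the suite's mount-test image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirDefaultMediumPod writes a file with the requested mode on an
// emptyDir backed by node storage, then prints the mode so the framework
// can assert on the container log.
func emptyDirDefaultMediumPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault = node disk; the (root,0644,tmpfs)
					// variants use corev1.StorageMediumMemory instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}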
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 13:59:52.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 15 14:00:03.151: INFO: Successfully updated pod "labelsupdate813edaf4-6bfb-4cc0-90f4-36eec2348ae2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:00:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9711" for this suite.
Dec 15 14:00:27.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:00:27.497: INFO: namespace downward-api-9711 deletion completed in 22.246671381s

• [SLOW TEST:35.208 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
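The labels-update spec works differently from the one-shot volume tests: the pod keeps running and tails a downwardAPI file, the test patches the pod's labels, and the kubelet atomically rewrites the projected file (the "Successfully updated pod" line above). A sketch of the relevant wiring:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsUpdatePod tails a downwardAPI file backed by metadata.labels; when
// the test mutates the labels, the kubelet refreshes the file in place.
func labelsUpdatePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "labelsupdate-example",
			Namespace: ns,
			Labels:    map[string]string{"time": "t0"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}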
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:00:27.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4b585a43-9f77-4ea7-aff7-24501e8cbf8f
STEP: Creating a pod to test consume secrets
Dec 15 14:00:27.630: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793" in namespace "projected-4615" to be "success or failure"
Dec 15 14:00:27.656: INFO: Pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793": Phase="Pending", Reason="", readiness=false. Elapsed: 26.305514ms
Dec 15 14:00:29.665: INFO: Pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034543088s
Dec 15 14:00:31.710: INFO: Pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079858914s
Dec 15 14:00:33.721: INFO: Pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090578905s
Dec 15 14:00:35.793: INFO: Pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162810928s
STEP: Saw pod success
Dec 15 14:00:35.793: INFO: Pod "pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793" satisfied condition "success or failure"
Dec 15 14:00:35.800: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793 container projected-secret-volume-test: 
STEP: delete the pod
Dec 15 14:00:35.959: INFO: Waiting for pod pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793 to disappear
Dec 15 14:00:35.968: INFO: Pod pod-projected-secrets-55c5abb7-ff87-4827-951d-0559dbcf8793 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:00:35.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4615" for this suite.
Dec 15 14:00:41.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:00:42.161: INFO: namespace projected-4615 deletion completed in 6.185867288s

• [SLOW TEST:14.664 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
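"With mappings" means the secret's keys are renamed on disk via Items rather than being projected under their own names. The distinguishing fragment, sketched with illustrative key and path names:

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedSecretWithMappings renames the secret key "data-1" to a new path
// inside the mount, which is what "with mappings" refers to in the spec name.
func projectedSecretWithMappings(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key in the Secret
							Path: "new-path-data-1", // file name under the mount point
						}},
					},
				}},
			},
		},
	}
}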
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:00:42.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 15 14:00:42.312: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768102,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 15 14:00:42.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768102,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 15 14:00:52.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768116,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 15 14:00:52.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768116,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 15 14:01:02.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768130,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 15 14:01:02.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768130,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 15 14:01:12.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768144,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 15 14:01:12.369: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-a,UID:0a3bc1d1-f533-489d-9a05-a43c53b0d2d0,ResourceVersion:16768144,Generation:0,CreationTimestamp:2019-12-15 14:00:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 15 14:01:22.382: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-b,UID:d729c3b9-0697-4423-918e-e4320669f4ed,ResourceVersion:16768160,Generation:0,CreationTimestamp:2019-12-15 14:01:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 15 14:01:22.382: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-b,UID:d729c3b9-0697-4423-918e-e4320669f4ed,ResourceVersion:16768160,Generation:0,CreationTimestamp:2019-12-15 14:01:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 15 14:01:32.479: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-b,UID:d729c3b9-0697-4423-918e-e4320669f4ed,ResourceVersion:16768174,Generation:0,CreationTimestamp:2019-12-15 14:01:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 15 14:01:32.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5433,SelfLink:/api/v1/namespaces/watch-5433/configmaps/e2e-watch-test-configmap-b,UID:d729c3b9-0697-4423-918e-e4320669f4ed,ResourceVersion:16768174,Generation:0,CreationTimestamp:2019-12-15 14:01:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:01:42.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5433" for this suite.
Dec 15 14:01:48.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:01:48.723: INFO: namespace watch-5433 deletion completed in 6.208218192s

• [SLOW TEST:66.562 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
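The Watchers spec opens label-filtered watches and asserts which of them receive the ADDED/MODIFIED/DELETED events dumped above; the A watcher and the A-or-B watcher both see configmap-a, which is why every event is logged twice. A minimal client-go sketch of one such watch, using the pre-context API that matches this v1.15-era suite:

package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps opens one label-filtered watch and prints each event type,
// matching the "Got : ADDED/MODIFIED/DELETED" lines above. The spec runs
// three of these, with selectors for label A, label B, and A-or-B.
func watchConfigMaps(c kubernetes.Interface, ns, selector string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}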
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:01:48.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8b2d3010-99f5-4742-a85d-e199f7729137
STEP: Creating a pod to test consume secrets
Dec 15 14:01:48.984: INFO: Waiting up to 5m0s for pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9" in namespace "secrets-2714" to be "success or failure"
Dec 15 14:01:48.988: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.612433ms
Dec 15 14:01:50.996: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011708329s
Dec 15 14:01:53.003: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019158341s
Dec 15 14:01:55.013: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028997304s
Dec 15 14:01:57.019: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035059846s
Dec 15 14:01:59.053: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069462718s
STEP: Saw pod success
Dec 15 14:01:59.054: INFO: Pod "pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9" satisfied condition "success or failure"
Dec 15 14:01:59.065: INFO: Trying to get logs from node iruya-node pod pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9 container secret-volume-test: 
STEP: delete the pod
Dec 15 14:01:59.211: INFO: Waiting for pod pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9 to disappear
Dec 15 14:01:59.216: INFO: Pod pod-secrets-50c600b1-840f-40c1-8a39-620da4a002e9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:01:59.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2714" for this suite.
Dec 15 14:02:05.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:02:05.381: INFO: namespace secrets-2714 deletion completed in 6.160524456s
STEP: Destroying namespace "secret-namespace-3963" for this suite.
Dec 15 14:02:11.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:02:11.575: INFO: namespace secret-namespace-3963 deletion completed in 6.194168837s

• [SLOW TEST:22.852 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
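Two namespaces are destroyed at the end because the spec creates a decoy secret with the same name in a second namespace ("secret-namespace-3963" above) and mounts only the test-namespace copy, proving secret references resolve per namespace. Sketched with pre-context client-go calls and illustrative data:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSameNameSecrets creates an identically named secret in each given
// namespace; the spec then mounts one copy and checks its contents.
func createSameNameSecrets(c kubernetes.Interface, name string, namespaces ...string) error {
	for _, ns := range namespaces {
		s := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
			Data:       map[string][]byte{"data-1": []byte("value-1")},
		}
		if _, err := c.CoreV1().Secrets(ns).Create(s); err != nil {
			return err
		}
	}
	return nil
}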
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:02:11.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-47eb9833-81c5-48da-b39d-495afd0042db
STEP: Creating a pod to test consume secrets
Dec 15 14:02:11.697: INFO: Waiting up to 5m0s for pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64" in namespace "secrets-4031" to be "success or failure"
Dec 15 14:02:11.719: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64": Phase="Pending", Reason="", readiness=false. Elapsed: 21.513539ms
Dec 15 14:02:13.739: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041518401s
Dec 15 14:02:15.757: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06001727s
Dec 15 14:02:17.771: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073983671s
Dec 15 14:02:19.781: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083618051s
Dec 15 14:02:21.803: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105443925s
STEP: Saw pod success
Dec 15 14:02:21.803: INFO: Pod "pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64" satisfied condition "success or failure"
Dec 15 14:02:21.815: INFO: Trying to get logs from node iruya-node pod pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64 container secret-volume-test: 
STEP: delete the pod
Dec 15 14:02:22.623: INFO: Waiting for pod pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64 to disappear
Dec 15 14:02:22.643: INFO: Pod pod-secrets-a23b6337-5fb7-4b0e-810c-fc3b25bc3a64 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:02:22.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4031" for this suite.
Dec 15 14:02:28.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:02:28.971: INFO: namespace secrets-4031 deletion completed in 6.307453273s

• [SLOW TEST:17.395 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
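DefaultMode sets the permission bits stamped on every file projected from the secret; the test container stats the mounted file to verify them. The field, sketched (0400 is an assumption, not a value read from this log):

package sketch

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithDefaultMode applies the given permission bits to every
// file projected from the secret.
func secretVolumeWithDefaultMode(secretName string) corev1.Volume {
	mode := int32(0400) // assumption: a typical restrictive mode for this spec
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}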
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:02:28.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 14:02:29.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff" in namespace "projected-8549" to be "success or failure"
Dec 15 14:02:29.128: INFO: Pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff": Phase="Pending", Reason="", readiness=false. Elapsed: 58.41647ms
Dec 15 14:02:31.141: INFO: Pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07181225s
Dec 15 14:02:33.151: INFO: Pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081170134s
Dec 15 14:02:35.161: INFO: Pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091424535s
Dec 15 14:02:37.186: INFO: Pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116607853s
STEP: Saw pod success
Dec 15 14:02:37.186: INFO: Pod "downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff" satisfied condition "success or failure"
Dec 15 14:02:37.192: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff container client-container: 
STEP: delete the pod
Dec 15 14:02:37.286: INFO: Waiting for pod downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff to disappear
Dec 15 14:02:37.345: INFO: Pod downwardapi-volume-2d537a99-8dad-43fc-ae68-eba6a666ecff no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:02:37.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8549" for this suite.
Dec 15 14:02:43.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:02:43.513: INFO: namespace projected-8549 deletion completed in 6.154641519s

• [SLOW TEST:14.541 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
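"Podname only" projects a single downward-API file backed by metadata.name; the container cats it and the framework compares the output against the pod's own name. Sketch:

package sketch

import corev1 "k8s.io/api/core/v1"

// podnameOnlyVolume projects exactly one downward-API file, backed by the
// pod's own metadata.name.
func podnameOnlyVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}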
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:02:43.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 14:02:43.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3" in namespace "projected-3451" to be "success or failure"
Dec 15 14:02:43.671: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.258603ms
Dec 15 14:02:45.682: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026621356s
Dec 15 14:02:47.695: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039426726s
Dec 15 14:02:49.703: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047578653s
Dec 15 14:02:51.712: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057109551s
Dec 15 14:02:53.727: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071288501s
STEP: Saw pod success
Dec 15 14:02:53.727: INFO: Pod "downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3" satisfied condition "success or failure"
Dec 15 14:02:53.733: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3 container client-container: 
STEP: delete the pod
Dec 15 14:02:53.856: INFO: Waiting for pod downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3 to disappear
Dec 15 14:02:54.068: INFO: Pod downwardapi-volume-3145bab3-522c-435a-af83-db00cfa82da3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:02:54.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3451" for this suite.
Dec 15 14:03:00.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:03:00.327: INFO: namespace projected-3451 deletion completed in 6.235267498s

• [SLOW TEST:16.814 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
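In contrast to the volume-wide defaultMode seen earlier, the per-item Mode field overrides the permission bits for one projected file only, which is what "set mode on item file" exercises. Sketch (0440 is illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// itemWithMode overrides the permission bits for a single projected file,
// as opposed to DefaultMode, which applies to every file in the volume.
func itemWithMode() corev1.DownwardAPIVolumeFile {
	mode := int32(0440) // illustrative value
	return corev1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		Mode:     &mode,
	}
}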
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:03:00.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-8t7x
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 14:03:00.439: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8t7x" in namespace "subpath-8322" to be "success or failure"
Dec 15 14:03:00.447: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Pending", Reason="", readiness=false. Elapsed: 7.958706ms
Dec 15 14:03:02.465: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025590388s
Dec 15 14:03:04.479: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039527162s
Dec 15 14:03:06.489: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050012499s
Dec 15 14:03:08.504: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 8.064618885s
Dec 15 14:03:10.522: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 10.082496979s
Dec 15 14:03:12.545: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 12.105549626s
Dec 15 14:03:14.565: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 14.126019127s
Dec 15 14:03:16.585: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 16.145796997s
Dec 15 14:03:18.627: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 18.187459155s
Dec 15 14:03:20.647: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 20.207178841s
Dec 15 14:03:22.664: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 22.224864002s
Dec 15 14:03:24.674: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 24.235014692s
Dec 15 14:03:26.687: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 26.247316528s
Dec 15 14:03:28.694: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Running", Reason="", readiness=true. Elapsed: 28.254490864s
Dec 15 14:03:30.700: INFO: Pod "pod-subpath-test-configmap-8t7x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.260149664s
STEP: Saw pod success
Dec 15 14:03:30.700: INFO: Pod "pod-subpath-test-configmap-8t7x" satisfied condition "success or failure"
Dec 15 14:03:30.704: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-8t7x container test-container-subpath-configmap-8t7x: 
STEP: delete the pod
Dec 15 14:03:30.766: INFO: Waiting for pod pod-subpath-test-configmap-8t7x to disappear
Dec 15 14:03:30.775: INFO: Pod pod-subpath-test-configmap-8t7x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8t7x
Dec 15 14:03:30.775: INFO: Deleting pod "pod-subpath-test-configmap-8t7x" in namespace "subpath-8322"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:03:30.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8322" for this suite.
Dec 15 14:03:36.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:03:36.950: INFO: namespace subpath-8322 deletion completed in 6.134084007s

• [SLOW TEST:36.622 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
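The atomic-writer subpath spec is the long-running one above: the container mounts a single configMap key via subPath over an existing file and re-reads it while the pod stays Running, then exits. The distinguishing mount, sketched with a hypothetical target path:

package sketch

import corev1 "k8s.io/api/core/v1"

// subPathOverExistingFile mounts one key of the volume over an existing file
// instead of shadowing a whole directory; the target path is hypothetical.
func subPathOverExistingFile(volumeName string) corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      volumeName,
		MountPath: "/probe-volume/existing-file", // hypothetical pre-existing file
		SubPath:   "configmap-file",              // single key within the volume
	}
}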
SSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:03:36.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2392
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2392
STEP: Deleting pre-stop pod
Dec 15 14:04:00.189: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:04:00.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2392" for this suite.
Dec 15 14:04:38.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:04:38.473: INFO: namespace prestop-2392 deletion completed in 38.212175146s

• [SLOW TEST:61.523 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
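The PreStop spec starts a server pod, then a tester pod whose preStop hook calls back to the server; deleting the tester fires the hook, and the JSON dump above shows the server counted exactly one "prestop" hit. A sketch of the hook wiring (corev1.Handler is the v1.15-era type name, renamed LifecycleHandler in later API versions; the callback endpoint is hypothetical):

package sketch

import corev1 "k8s.io/api/core/v1"

// preStopHook runs when the pod is deleted, before the container is killed;
// the spec's tester pod uses it to report back to the server pod.
func preStopHook(serverURL string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			Exec: &corev1.ExecAction{
				// Hypothetical callback; the suite's tester hits the server's
				// recording endpoint, which increments the "prestop" counter.
				Command: []string{"wget", "-O-", serverURL},
			},
		},
	}
}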
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:04:38.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-89e7c42b-59a7-4018-a873-410fee91a989
STEP: Creating a pod to test consume configMaps
Dec 15 14:04:38.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6" in namespace "configmap-1647" to be "success or failure"
Dec 15 14:04:38.672: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.480874ms
Dec 15 14:04:40.687: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027579828s
Dec 15 14:04:42.696: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037050398s
Dec 15 14:04:44.715: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056008156s
Dec 15 14:04:46.725: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066147863s
Dec 15 14:04:48.733: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073963269s
STEP: Saw pod success
Dec 15 14:04:48.733: INFO: Pod "pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6" satisfied condition "success or failure"
Dec 15 14:04:48.737: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6 container configmap-volume-test: 
STEP: delete the pod
Dec 15 14:04:48.856: INFO: Waiting for pod pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6 to disappear
Dec 15 14:04:48.868: INFO: Pod pod-configmaps-f30b75df-e960-4ae3-ac44-ac56fd939ab6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:04:48.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1647" for this suite.
Dec 15 14:04:54.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:04:55.019: INFO: namespace configmap-1647 deletion completed in 6.139517224s

• [SLOW TEST:16.546 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
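This is the configMap analogue of the secret defaultMode spec earlier; ConfigMapVolumeSource carries the same field. Sketch:

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapVolumeWithDefaultMode is the configMap twin of the secret
// defaultMode sketch above: same field, different volume source.
func configMapVolumeWithDefaultMode(cmName string) corev1.VolumeSource {
	mode := int32(0400) // assumption, as in the secret sketch
	return corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			DefaultMode:          &mode,
		},
	}
}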
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:04:55.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:04:55.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:05:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8601" for this suite.
Dec 15 14:06:07.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:06:07.508: INFO: namespace pods-8601 deletion completed in 1m2.208383356s

• [SLOW TEST:72.489 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
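The websocket spec retrieves the container's logs by dialing the API server's pod /log subresource over a websocket instead of a plain HTTP stream. For orientation, the plain-stream equivalent with pre-context client-go looks like this (the test's own transport differs):

package sketch

import (
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows a pod's logs over the same /log subresource the
// spec exercises, but via an ordinary HTTP stream rather than a websocket.
func streamPodLogs(c kubernetes.Interface, ns, pod string) error {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream() // pre-context client-go, matching the v1.15 era
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}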
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:06:07.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bf2d5b49-b8d3-45ad-b86d-d6e59972ce53
STEP: Creating a pod to test consume configMaps
Dec 15 14:06:07.707: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a" in namespace "projected-116" to be "success or failure"
Dec 15 14:06:07.725: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.244647ms
Dec 15 14:06:09.733: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025162462s
Dec 15 14:06:11.742: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034824645s
Dec 15 14:06:13.757: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049990217s
Dec 15 14:06:15.842: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134456885s
Dec 15 14:06:17.857: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149563861s
STEP: Saw pod success
Dec 15 14:06:17.857: INFO: Pod "pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a" satisfied condition "success or failure"
Dec 15 14:06:17.863: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 14:06:17.997: INFO: Waiting for pod pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a to disappear
Dec 15 14:06:18.041: INFO: Pod pod-projected-configmaps-d8bf3671-e617-41f1-aed7-8db8ae7f158a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:06:18.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-116" for this suite.
Dec 15 14:06:24.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:06:24.611: INFO: namespace projected-116 deletion completed in 6.559787256s

• [SLOW TEST:17.102 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
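The non-root variant of the projected configMap test runs the consuming container under a non-zero UID, verifying the projected files are readable without root privileges. The distinguishing fragment (UID 1000 is an assumption):

package sketch

import corev1 "k8s.io/api/core/v1"

// nonRootContainer runs the consuming container under an arbitrary non-zero
// UID so the check proves the projected files are readable to a non-root user.
func nonRootContainer(image string) corev1.Container {
	uid := int64(1000) // assumption: any non-root UID serves the check
	return corev1.Container{
		Name:            "projected-configmap-volume-test",
		Image:           image,
		Command:         []string{"cat", "/etc/projected-configmap-volume/data-1"},
		SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
	}
}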
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:06:24.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 15 14:06:24.797: INFO: Waiting up to 5m0s for pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515" in namespace "containers-3822" to be "success or failure"
Dec 15 14:06:24.807: INFO: Pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033031ms
Dec 15 14:06:26.818: INFO: Pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020990423s
Dec 15 14:06:28.837: INFO: Pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039718156s
Dec 15 14:06:30.849: INFO: Pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05224363s
Dec 15 14:06:32.862: INFO: Pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064677368s
STEP: Saw pod success
Dec 15 14:06:32.862: INFO: Pod "client-containers-5f95401b-e786-4664-b514-2fde6e6dd515" satisfied condition "success or failure"
Dec 15 14:06:32.867: INFO: Trying to get logs from node iruya-node pod client-containers-5f95401b-e786-4664-b514-2fde6e6dd515 container test-container: 
STEP: delete the pod
Dec 15 14:06:32.919: INFO: Waiting for pod client-containers-5f95401b-e786-4664-b514-2fde6e6dd515 to disappear
Dec 15 14:06:32.948: INFO: Pod client-containers-5f95401b-e786-4664-b514-2fde6e6dd515 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:06:32.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3822" for this suite.
Dec 15 14:06:39.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:06:39.176: INFO: namespace containers-3822 deletion completed in 6.206938139s

• [SLOW TEST:14.565 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
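Overriding "the image's default arguments (docker cmd)" maps to setting Args on the container: Args replaces the image CMD while the image ENTRYPOINT stays in effect (setting Command would replace the ENTRYPOINT instead). Sketch with illustrative values; the suite uses its own entrypoint-tester image:

package sketch

import corev1 "k8s.io/api/core/v1"

// argsOverrideContainer replaces the image's default CMD via Args while
// leaving the image ENTRYPOINT in effect.
func argsOverrideContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-container",
		Image: "busybox", // illustrative; the suite's image prints its argv
		Args:  []string{"override", "arguments"},
	}
}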
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:06:39.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 15 14:06:39.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 15 14:06:39.299: INFO: Waiting for terminating namespaces to be deleted...
Dec 15 14:06:39.302: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 15 14:06:39.317: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.317: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 14:06:39.317: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 15 14:06:39.317: INFO: 	Container weave ready: true, restart count 0
Dec 15 14:06:39.317: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 14:06:39.317: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 15 14:06:39.331: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container weave ready: true, restart count 0
Dec 15 14:06:39.332: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 14:06:39.332: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container coredns ready: true, restart count 0
Dec 15 14:06:39.332: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container etcd ready: true, restart count 0
Dec 15 14:06:39.332: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 14:06:39.332: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 15 14:06:39.332: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 15 14:06:39.332: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container coredns ready: true, restart count 0
Dec 15 14:06:39.332: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 15 14:06:39.332: INFO: 	Container kube-scheduler ready: true, restart count 7
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 15 14:06:39.444: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 15 14:06:39.444: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6.15e0908bbf8c9cb2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-958/filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6.15e0908cedaea9ba], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6.15e0908dc2d13d1a], Reason = [Created], Message = [Created container filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6.15e0908de9a6fd5e], Reason = [Started], Message = [Started container filler-pod-8d5684af-8b05-4faf-bf85-4e316c5416b6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b.15e0908bc124c8de], Reason = [Scheduled], Message = [Successfully assigned sched-pred-958/filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b.15e0908caf769cf3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b.15e0908d5a16f8d7], Reason = [Created], Message = [Created container filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b.15e0908d81a7dd07], Reason = [Started], Message = [Started container filler-pod-f875fbe0-d6b8-472e-82ac-f41b2f64144b]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e0908e8ed5100d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:06:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-958" for this suite.
Dec 15 14:06:58.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:06:58.837: INFO: namespace sched-pred-958 deletion completed in 6.136433528s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.660 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
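For readers reproducing this outside the suite: the FailedScheduling event above comes from a pod whose CPU request exceeds what either node has left after the filler pods. A minimal sketch; the pod name and the request value are illustrative, not taken from this run:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: additional-pod-demo     # illustrative name
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: "4"                # assumption: more than any node's free allocatable CPU
  EOF
  # The pod stays Pending with the same event seen above:
  #   Warning  FailedScheduling  0/2 nodes are available: 2 Insufficient cpu.
  kubectl describe pod additional-pod-demo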
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:06:58.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 15 14:06:59.852: INFO: namespace kubectl-5665
Dec 15 14:06:59.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5665'
Dec 15 14:07:03.680: INFO: stderr: ""
Dec 15 14:07:03.681: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 15 14:07:04.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:04.690: INFO: Found 0 / 1
Dec 15 14:07:05.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:05.690: INFO: Found 0 / 1
Dec 15 14:07:06.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:06.690: INFO: Found 0 / 1
Dec 15 14:07:07.695: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:07.695: INFO: Found 0 / 1
Dec 15 14:07:08.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:08.690: INFO: Found 0 / 1
Dec 15 14:07:09.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:09.690: INFO: Found 0 / 1
Dec 15 14:07:10.705: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:10.705: INFO: Found 0 / 1
Dec 15 14:07:11.692: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:11.692: INFO: Found 0 / 1
Dec 15 14:07:12.692: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:12.692: INFO: Found 1 / 1
Dec 15 14:07:12.693: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 15 14:07:12.699: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:07:12.699: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Dec 15 14:07:12.699: INFO: wait on redis-master startup in kubectl-5665 
Dec 15 14:07:12.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ftsxl redis-master --namespace=kubectl-5665'
Dec 15 14:07:12.872: INFO: stderr: ""
Dec 15 14:07:12.872: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Dec 14:07:10.579 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Dec 14:07:10.585 # Server started, Redis version 3.2.12\n1:M 15 Dec 14:07:10.586 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Dec 14:07:10.586 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 15 14:07:12.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5665'
Dec 15 14:07:13.080: INFO: stderr: ""
Dec 15 14:07:13.080: INFO: stdout: "service/rm2 exposed\n"
Dec 15 14:07:13.154: INFO: Service rm2 in namespace kubectl-5665 found.
STEP: exposing service
Dec 15 14:07:15.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5665'
Dec 15 14:07:15.495: INFO: stderr: ""
Dec 15 14:07:15.495: INFO: stdout: "service/rm3 exposed\n"
Dec 15 14:07:15.505: INFO: Service rm3 in namespace kubectl-5665 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:07:17.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5665" for this suite.
Dec 15 14:07:39.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:07:39.716: INFO: namespace kubectl-5665 deletion completed in 22.185702713s

• [SLOW TEST:40.878 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
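The two expose steps chain: a service created from an RC can itself be exposed under a new name and port, and both resolve to the same backend pods. Minimal commands mirroring the run above (same names as the log; run them with --namespace set to the test namespace):

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  # Both services should list the redis-master pod IP on port 6379:
  kubectl get endpoints rm2 rm3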
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:07:39.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-qmzg
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 14:07:39.870: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qmzg" in namespace "subpath-4037" to be "success or failure"
Dec 15 14:07:39.876: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221636ms
Dec 15 14:07:41.905: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034760423s
Dec 15 14:07:43.923: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052970897s
Dec 15 14:07:45.934: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064289962s
Dec 15 14:07:47.944: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 8.073729818s
Dec 15 14:07:49.951: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 10.081457008s
Dec 15 14:07:51.960: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 12.090226996s
Dec 15 14:07:53.969: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 14.099341173s
Dec 15 14:07:55.981: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 16.111158716s
Dec 15 14:07:57.996: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 18.126454347s
Dec 15 14:08:00.004: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 20.133877467s
Dec 15 14:08:02.013: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 22.143662614s
Dec 15 14:08:04.024: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 24.154606237s
Dec 15 14:08:06.034: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 26.164622788s
Dec 15 14:08:08.046: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 28.176190558s
Dec 15 14:08:10.056: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Running", Reason="", readiness=true. Elapsed: 30.185896387s
Dec 15 14:08:12.066: INFO: Pod "pod-subpath-test-secret-qmzg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.196246545s
STEP: Saw pod success
Dec 15 14:08:12.066: INFO: Pod "pod-subpath-test-secret-qmzg" satisfied condition "success or failure"
Dec 15 14:08:12.070: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-qmzg container test-container-subpath-secret-qmzg: 
STEP: delete the pod
Dec 15 14:08:12.343: INFO: Waiting for pod pod-subpath-test-secret-qmzg to disappear
Dec 15 14:08:12.360: INFO: Pod pod-subpath-test-secret-qmzg no longer exists
STEP: Deleting pod pod-subpath-test-secret-qmzg
Dec 15 14:08:12.361: INFO: Deleting pod "pod-subpath-test-secret-qmzg" in namespace "subpath-4037"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:08:12.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4037" for this suite.
Dec 15 14:08:18.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:08:18.651: INFO: namespace subpath-4037 deletion completed in 6.278488775s

• [SLOW TEST:38.934 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
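A sketch of the kind of pod this test builds: a secret volume mounted with subPath so the container sees a single key as a file. The secret name, key, and paths are illustrative; the real test additionally keeps the container reading the file in a loop to prove it stays valid while the kubelet atomically rewrites the volume:

  kubectl create secret generic subpath-demo --from-literal=secret-key=hello
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: secret-vol
      secret:
        secretName: subpath-demo
    containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "cat /probe/secret-key"]
      volumeMounts:
      - name: secret-vol
        mountPath: /probe/secret-key
        subPath: secret-key       # mounts just this key, not the whole volume
  EOF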
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:08:18.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-7659
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7659 to expose endpoints map[]
Dec 15 14:08:19.008: INFO: Get endpoints failed (14.403349ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 15 14:08:20.014: INFO: successfully validated that service multi-endpoint-test in namespace services-7659 exposes endpoints map[] (1.020398197s elapsed)
STEP: Creating pod pod1 in namespace services-7659
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7659 to expose endpoints map[pod1:[100]]
Dec 15 14:08:24.204: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.180342343s elapsed, will retry)
Dec 15 14:08:28.295: INFO: successfully validated that service multi-endpoint-test in namespace services-7659 exposes endpoints map[pod1:[100]] (8.271350065s elapsed)
STEP: Creating pod pod2 in namespace services-7659
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7659 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 15 14:08:33.602: INFO: Unexpected endpoints: found map[1e4aa561-eae8-43e8-a093-b8141c97eae7:[100]], expected map[pod1:[100] pod2:[101]] (5.300576873s elapsed, will retry)
Dec 15 14:08:35.646: INFO: successfully validated that service multi-endpoint-test in namespace services-7659 exposes endpoints map[pod1:[100] pod2:[101]] (7.344058652s elapsed)
STEP: Deleting pod pod1 in namespace services-7659
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7659 to expose endpoints map[pod2:[101]]
Dec 15 14:08:35.687: INFO: successfully validated that service multi-endpoint-test in namespace services-7659 exposes endpoints map[pod2:[101]] (29.718208ms elapsed)
STEP: Deleting pod pod2 in namespace services-7659
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7659 to expose endpoints map[]
Dec 15 14:08:35.739: INFO: successfully validated that service multi-endpoint-test in namespace services-7659 exposes endpoints map[] (16.763587ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:08:35.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7659" for this suite.
Dec 15 14:08:57.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:08:58.020: INFO: namespace services-7659 deletion completed in 22.164876607s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.368 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
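A sketch of the multiport service shape this test exercises; the targetPort values 100 and 101 match the endpoint maps in the log, while the selector, service ports, and port names are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multiport-demo       # assumption: pod1 and pod2 carry this label
    ports:
    - name: portname1
      port: 80
      targetPort: 100           # the port pod1 serves on, per the log's endpoint map
    - name: portname2
      port: 81
      targetPort: 101           # the port pod2 serves on
  EOF
  # Endpoints appear and disappear as matching pods become ready or are deleted:
  kubectl get endpoints multi-endpoint-test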
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:08:58.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 15 14:09:07.221: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:09:07.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5041" for this suite.
Dec 15 14:09:13.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:09:13.537: INFO: namespace container-runtime-5041 deletion completed in 6.211619188s

• [SLOW TEST:15.516 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
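The behaviour verified above hangs on two container fields: terminationMessagePath (the file the kubelet reads the message from) and terminationMessagePolicy. A minimal sketch; the pod name is illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log    # the default path
      terminationMessagePolicy: FallbackToLogsOnError # logs are used only if the file is empty and the container failed
  EOF
  # After the pod succeeds, the message surfaces in status, as checked above:
  kubectl get pod termination-message-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'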
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:09:13.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 15 14:09:13.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6056'
Dec 15 14:09:14.260: INFO: stderr: ""
Dec 15 14:09:14.260: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 14:09:14.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6056'
Dec 15 14:09:14.434: INFO: stderr: ""
Dec 15 14:09:14.434: INFO: stdout: "update-demo-nautilus-gmxlm update-demo-nautilus-m7sqs "
Dec 15 14:09:14.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6056'
Dec 15 14:09:14.680: INFO: stderr: ""
Dec 15 14:09:14.680: INFO: stdout: ""
Dec 15 14:09:14.680: INFO: update-demo-nautilus-gmxlm is created but not running
Dec 15 14:09:19.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6056'
Dec 15 14:09:19.839: INFO: stderr: ""
Dec 15 14:09:19.839: INFO: stdout: "update-demo-nautilus-gmxlm update-demo-nautilus-m7sqs "
Dec 15 14:09:19.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6056'
Dec 15 14:09:19.958: INFO: stderr: ""
Dec 15 14:09:19.959: INFO: stdout: ""
Dec 15 14:09:19.959: INFO: update-demo-nautilus-gmxlm is created but not running
Dec 15 14:09:24.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6056'
Dec 15 14:09:25.138: INFO: stderr: ""
Dec 15 14:09:25.138: INFO: stdout: "update-demo-nautilus-gmxlm update-demo-nautilus-m7sqs "
Dec 15 14:09:25.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6056'
Dec 15 14:09:25.266: INFO: stderr: ""
Dec 15 14:09:25.266: INFO: stdout: "true"
Dec 15 14:09:25.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmxlm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6056'
Dec 15 14:09:25.383: INFO: stderr: ""
Dec 15 14:09:25.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 14:09:25.384: INFO: validating pod update-demo-nautilus-gmxlm
Dec 15 14:09:25.400: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 14:09:25.400: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 15 14:09:25.400: INFO: update-demo-nautilus-gmxlm is verified up and running
Dec 15 14:09:25.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7sqs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6056'
Dec 15 14:09:25.516: INFO: stderr: ""
Dec 15 14:09:25.517: INFO: stdout: "true"
Dec 15 14:09:25.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7sqs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6056'
Dec 15 14:09:25.604: INFO: stderr: ""
Dec 15 14:09:25.604: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 14:09:25.604: INFO: validating pod update-demo-nautilus-m7sqs
Dec 15 14:09:25.610: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 14:09:25.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 15 14:09:25.610: INFO: update-demo-nautilus-m7sqs is verified up and running
STEP: using delete to clean up resources
Dec 15 14:09:25.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6056'
Dec 15 14:09:25.700: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:09:25.700: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 15 14:09:25.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6056'
Dec 15 14:09:25.814: INFO: stderr: "No resources found.\n"
Dec 15 14:09:25.814: INFO: stdout: ""
Dec 15 14:09:25.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6056 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 14:09:25.921: INFO: stderr: ""
Dec 15 14:09:25.921: INFO: stdout: "update-demo-nautilus-gmxlm\nupdate-demo-nautilus-m7sqs\n"
Dec 15 14:09:26.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6056'
Dec 15 14:09:26.856: INFO: stderr: "No resources found.\n"
Dec 15 14:09:26.857: INFO: stdout: ""
Dec 15 14:09:26.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6056 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 14:09:28.018: INFO: stderr: ""
Dec 15 14:09:28.019: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:09:28.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6056" for this suite.
Dec 15 14:09:34.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:09:34.541: INFO: namespace kubectl-6056 deletion completed in 6.515280241s

• [SLOW TEST:21.003 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
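The RC piped to kubectl create above corresponds to a manifest along these lines, reconstructed from the image, label, and container name visible in the log (the replica count is inferred from the two pods; other fields of the real manifest are not shown in this run):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: update-demo-nautilus
  spec:
    replicas: 2
    selector:
      name: update-demo
    template:
      metadata:
        labels:
          name: update-demo
      spec:
        containers:
        - name: update-demo
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
  EOF
  # Tear-down matches the log's cleanup step:
  kubectl delete rc update-demo-nautilus --grace-period=0 --force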
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:09:34.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-58e99aac-5044-4a03-b10b-cda0f2eb7492
STEP: Creating a pod to test consume configMaps
Dec 15 14:09:34.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093" in namespace "configmap-6068" to be "success or failure"
Dec 15 14:09:34.788: INFO: Pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093": Phase="Pending", Reason="", readiness=false. Elapsed: 27.09241ms
Dec 15 14:09:36.797: INFO: Pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03601171s
Dec 15 14:09:38.811: INFO: Pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049357052s
Dec 15 14:09:40.835: INFO: Pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074069621s
Dec 15 14:09:42.853: INFO: Pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092103607s
STEP: Saw pod success
Dec 15 14:09:42.854: INFO: Pod "pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093" satisfied condition "success or failure"
Dec 15 14:09:42.860: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093 container configmap-volume-test: 
STEP: delete the pod
Dec 15 14:09:42.943: INFO: Waiting for pod pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093 to disappear
Dec 15 14:09:43.090: INFO: Pod pod-configmaps-1889a209-5ec2-4c4f-97dc-e5a4dd796093 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:09:43.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6068" for this suite.
Dec 15 14:09:49.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:09:49.339: INFO: namespace configmap-6068 deletion completed in 6.242702626s

• [SLOW TEST:14.797 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
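A sketch of mounting the same ConfigMap at two paths in one pod, as this test does; the map name, key, and mount paths are illustrative:

  kubectl create configmap multi-mount-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cm-vol-1
      configMap:
        name: multi-mount-demo
    - name: cm-vol-2
      configMap:
        name: multi-mount-demo
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
      volumeMounts:
      - name: cm-vol-1
        mountPath: /etc/cm-1
      - name: cm-vol-2
        mountPath: /etc/cm-2
  EOF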
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:09:49.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:09:49.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1835" for this suite.
Dec 15 14:09:55.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:09:55.788: INFO: namespace kubelet-test-1835 deletion completed in 6.232457733s

• [SLOW TEST:6.449 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
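What this test checks, in miniature: a pod whose command always fails can still be deleted normally. Names are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo
  spec:
    containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]   # exits 1 immediately, so the container crash-loops
  EOF
  kubectl delete pod bin-false-demo   # deletion succeeds regardless of the crash loop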
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:09:55.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 14:09:55.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a" in namespace "downward-api-5866" to be "success or failure"
Dec 15 14:09:55.952: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.927364ms
Dec 15 14:09:57.961: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067938916s
Dec 15 14:09:59.978: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085154377s
Dec 15 14:10:02.008: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115334883s
Dec 15 14:10:04.014: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121319596s
Dec 15 14:10:06.021: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128477534s
STEP: Saw pod success
Dec 15 14:10:06.021: INFO: Pod "downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a" satisfied condition "success or failure"
Dec 15 14:10:06.026: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a container client-container: 
STEP: delete the pod
Dec 15 14:10:06.160: INFO: Waiting for pod downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a to disappear
Dec 15 14:10:06.168: INFO: Pod downwardapi-volume-d2a94803-bc5e-4263-af91-73bc913a775a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:10:06.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5866" for this suite.
Dec 15 14:10:12.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:10:12.290: INFO: namespace downward-api-5866 deletion completed in 6.116831778s

• [SLOW TEST:16.501 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
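The limit read back above is exposed through a downwardAPI volume with a resourceFieldRef. A minimal sketch (names and the limit value are illustrative; the mounted file contains the limit in bytes):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]   # prints 67108864 for a 64Mi limit
      resources:
        limits:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF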
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:10:12.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-4cd03358-d118-43f4-ac21-2f5d8d1fae4d
STEP: Creating a pod to test consume configMaps
Dec 15 14:10:12.445: INFO: Waiting up to 5m0s for pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2" in namespace "configmap-1182" to be "success or failure"
Dec 15 14:10:12.463: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.847141ms
Dec 15 14:10:14.483: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037451126s
Dec 15 14:10:16.493: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047330757s
Dec 15 14:10:18.507: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062130014s
Dec 15 14:10:20.530: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084755641s
Dec 15 14:10:22.562: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116546633s
STEP: Saw pod success
Dec 15 14:10:22.562: INFO: Pod "pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2" satisfied condition "success or failure"
Dec 15 14:10:22.592: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2 container configmap-volume-test: 
STEP: delete the pod
Dec 15 14:10:22.805: INFO: Waiting for pod pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2 to disappear
Dec 15 14:10:22.811: INFO: Pod pod-configmaps-b75cea1c-4c0e-4da3-acea-32a69e9149b2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:10:22.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1182" for this suite.
Dec 15 14:10:28.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:10:28.954: INFO: namespace configmap-1182 deletion completed in 6.138615921s

• [SLOW TEST:16.664 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
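"Mappings and Item mode" refers to the items list on a configMap volume, which remaps keys to arbitrary relative paths and sets a per-file mode. A sketch with illustrative names:

  kubectl create configmap mapped-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-mapped
  spec:
    restartPolicy: Never
    volumes:
    - name: configmap-volume
      configMap:
        name: mapped-demo
        items:
        - key: data-1
          path: path/to/data-2    # the key is exposed under this relative path instead
          mode: 0400              # per-file mode; overrides defaultMode for this item
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/cm/path/to/data-2 && cat /etc/cm/path/to/data-2"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/cm
  EOF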
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:10:28.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 15 14:10:38.322: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:10:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5678" for this suite.
Dec 15 14:10:44.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:10:44.688: INFO: namespace container-runtime-5678 deletion completed in 6.237061137s

• [SLOW TEST:15.733 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:10:44.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-70d31804-c195-4d70-9360-d5ec5c7ef7e4
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-70d31804-c195-4d70-9360-d5ec5c7ef7e4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:10:55.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5720" for this suite.
Dec 15 14:11:17.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:11:17.340: INFO: namespace configmap-5720 deletion completed in 22.236948089s

• [SLOW TEST:32.651 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
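The "waiting to observe update in volume" step works because the kubelet periodically resyncs configMap volumes: editing the map through the API eventually changes the mounted file without restarting the pod. A sketch; the map name is illustrative, and cm-demo-pod stands for a long-running pod that mounts live-cm at /etc/cm:

  kubectl create configmap live-cm --from-literal=data-1=value-1
  # ... create a long-running pod mounting live-cm at /etc/cm ...
  kubectl patch configmap live-cm -p '{"data":{"data-1":"value-2"}}'
  # After the next kubelet sync (typically within a minute):
  kubectl exec cm-demo-pod -- cat /etc/cm/data-1   # eventually prints value-2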
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:11:17.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 15 14:11:35.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 14:11:35.849: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 14:11:37.850: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 14:11:37.868: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 14:11:39.850: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 14:11:39.862: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 14:11:41.850: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 14:11:41.865: INFO: Pod pod-with-poststart-http-hook still exists
Dec 15 14:11:43.850: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 15 14:11:43.872: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:11:43.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5394" for this suite.
Dec 15 14:12:05.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:12:06.039: INFO: namespace container-lifecycle-hook-5394 deletion completed in 22.152028185s

• [SLOW TEST:48.699 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
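The hook exercised above is a lifecycle.postStart handler of type httpGet: the kubelet sends the request right after the container starts, and the container is not considered Running until the handler completes. A sketch; host, port, and path are illustrative (the real test points them at a separate handler pod and then verifies the request arrived):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          httpGet:
            host: 10.32.0.5           # assumption: IP of a pod serving the hook endpoint
            path: /echo?msg=poststart
            port: 8080
  EOF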
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:12:06.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 15 14:12:06.209: INFO: Waiting up to 5m0s for pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758" in namespace "downward-api-5978" to be "success or failure"
Dec 15 14:12:06.248: INFO: Pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758": Phase="Pending", Reason="", readiness=false. Elapsed: 38.820315ms
Dec 15 14:12:08.258: INFO: Pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048555233s
Dec 15 14:12:10.289: INFO: Pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080117243s
Dec 15 14:12:12.311: INFO: Pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101414203s
Dec 15 14:12:14.321: INFO: Pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112015505s
STEP: Saw pod success
Dec 15 14:12:14.322: INFO: Pod "downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758" satisfied condition "success or failure"
Dec 15 14:12:14.326: INFO: Trying to get logs from node iruya-node pod downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758 container dapi-container: 
STEP: delete the pod
Dec 15 14:12:14.435: INFO: Waiting for pod downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758 to disappear
Dec 15 14:12:14.449: INFO: Pod downward-api-ce3b2d21-dde6-45ec-adfb-a7648c0b0758 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:12:14.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5978" for this suite.
Dec 15 14:12:20.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:12:20.734: INFO: namespace downward-api-5978 deletion completed in 6.265835551s

• [SLOW TEST:14.693 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
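The pod UID reaches the container through an env var with a fieldRef to metadata.uid. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo POD_UID=$POD_UID"]
      env:
      - name: POD_UID
        valueFrom:
          fieldRef:
            fieldPath: metadata.uid
  EOF
  kubectl logs downward-api-demo   # POD_UID=<the pod's metadata.uid>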
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:12:20.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-34526917-3c3f-4c29-864c-1e9834d06519
STEP: Creating a pod to test consume configMaps
Dec 15 14:12:20.964: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d" in namespace "configmap-6274" to be "success or failure"
Dec 15 14:12:20.974: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.929068ms
Dec 15 14:12:22.984: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019843866s
Dec 15 14:12:24.994: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029993814s
Dec 15 14:12:27.009: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044506611s
Dec 15 14:12:29.015: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050485174s
Dec 15 14:12:31.027: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06276841s
STEP: Saw pod success
Dec 15 14:12:31.027: INFO: Pod "pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d" satisfied condition "success or failure"
Dec 15 14:12:31.042: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d container configmap-volume-test: 
STEP: delete the pod
Dec 15 14:12:31.112: INFO: Waiting for pod pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d to disappear
Dec 15 14:12:31.124: INFO: Pod pod-configmaps-c9210f91-ed4a-4da3-92d6-827b044bf81d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:12:31.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6274" for this suite.
Dec 15 14:12:37.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:12:37.350: INFO: namespace configmap-6274 deletion completed in 6.220940003s

• [SLOW TEST:16.616 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:12:37.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 15 14:12:37.556: INFO: Waiting up to 5m0s for pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f" in namespace "containers-376" to be "success or failure"
Dec 15 14:12:37.646: INFO: Pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.899683ms
Dec 15 14:12:39.670: INFO: Pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113925954s
Dec 15 14:12:41.681: INFO: Pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124743424s
Dec 15 14:12:43.701: INFO: Pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145453136s
Dec 15 14:12:45.710: INFO: Pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153614995s
STEP: Saw pod success
Dec 15 14:12:45.710: INFO: Pod "client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f" satisfied condition "success or failure"
Dec 15 14:12:45.714: INFO: Trying to get logs from node iruya-node pod client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f container test-container: 
STEP: delete the pod
Dec 15 14:12:45.760: INFO: Waiting for pod client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f to disappear
Dec 15 14:12:45.768: INFO: Pod client-containers-7d31d338-67e5-4f8a-9cca-9e5e9d7ae22f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:12:45.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-376" for this suite.
Dec 15 14:12:51.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:12:52.176: INFO: namespace containers-376 deletion completed in 6.387663911s

• [SLOW TEST:14.824 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
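Note: the pod under test is not shown in the log. The behavior being verified is that a pod's `command` field replaces the image's Docker ENTRYPOINT (while `args` would replace CMD). A minimal sketch, with illustrative names and command:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # `command` overrides the image's ENTRYPOINT; the image's default
    # entrypoint is never executed.
    command: ["/bin/echo", "hello from the overridden entrypoint"]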
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:12:52.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 15 14:12:52.338: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix896541868/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:12:52.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3673" for this suite.
Dec 15 14:12:58.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:12:58.635: INFO: namespace kubectl-3673 deletion completed in 6.202974336s

• [SLOW TEST:6.458 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
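Note: the test only retrieves `/api/` through the socket. A rough manual reproduction (the socket path is illustrative, and this assumes a curl build with Unix-socket support):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/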
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:12:58.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 15 14:12:58.798: INFO: Number of nodes with available pods: 0
Dec 15 14:12:58.798: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:00.383: INFO: Number of nodes with available pods: 0
Dec 15 14:13:00.384: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:00.889: INFO: Number of nodes with available pods: 0
Dec 15 14:13:00.889: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:01.810: INFO: Number of nodes with available pods: 0
Dec 15 14:13:01.810: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:02.822: INFO: Number of nodes with available pods: 0
Dec 15 14:13:02.822: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:05.014: INFO: Number of nodes with available pods: 0
Dec 15 14:13:05.014: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:06.308: INFO: Number of nodes with available pods: 0
Dec 15 14:13:06.308: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:06.915: INFO: Number of nodes with available pods: 0
Dec 15 14:13:06.915: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:07.840: INFO: Number of nodes with available pods: 1
Dec 15 14:13:07.840: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:08.824: INFO: Number of nodes with available pods: 1
Dec 15 14:13:08.824: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:09.829: INFO: Number of nodes with available pods: 2
Dec 15 14:13:09.829: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 15 14:13:09.914: INFO: Number of nodes with available pods: 1
Dec 15 14:13:09.915: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:10.935: INFO: Number of nodes with available pods: 1
Dec 15 14:13:10.935: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:11.934: INFO: Number of nodes with available pods: 1
Dec 15 14:13:11.934: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:12.928: INFO: Number of nodes with available pods: 1
Dec 15 14:13:12.928: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:13.927: INFO: Number of nodes with available pods: 1
Dec 15 14:13:13.927: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:14.934: INFO: Number of nodes with available pods: 1
Dec 15 14:13:14.934: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:15.928: INFO: Number of nodes with available pods: 1
Dec 15 14:13:15.928: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:16.978: INFO: Number of nodes with available pods: 1
Dec 15 14:13:16.981: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:17.931: INFO: Number of nodes with available pods: 1
Dec 15 14:13:17.931: INFO: Node iruya-node is running more than one daemon pod
Dec 15 14:13:18.935: INFO: Number of nodes with available pods: 2
Dec 15 14:13:18.936: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8048, will wait for the garbage collector to delete the pods
Dec 15 14:13:19.018: INFO: Deleting DaemonSet.extensions daemon-set took: 18.901557ms
Dec 15 14:13:19.319: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.505424ms
Dec 15 14:13:36.746: INFO: Number of nodes with available pods: 0
Dec 15 14:13:36.746: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 14:13:36.752: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8048/daemonsets","resourceVersion":"16769979"},"items":null}

Dec 15 14:13:36.756: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8048/pods","resourceVersion":"16769979"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:13:36.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8048" for this suite.
Dec 15 14:13:42.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:13:42.940: INFO: namespace daemonsets-8048 deletion completed in 6.150063759s

• [SLOW TEST:44.305 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
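Note: the "simple DaemonSet" itself is not printed. A minimal sketch of an equivalent manifest (the label key and image are illustrative assumptions); after both daemon pods are available, the test manually sets one pod's phase to Failed and verifies the controller recreates it, which is the second wait loop above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumption: the suite's stock nginx image
        ports:
        - containerPort: 80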
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:13:42.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 15 14:13:43.008: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 15 14:13:43.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1120'
Dec 15 14:13:43.566: INFO: stderr: ""
Dec 15 14:13:43.566: INFO: stdout: "service/redis-slave created\n"
Dec 15 14:13:43.567: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 15 14:13:43.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1120'
Dec 15 14:13:44.193: INFO: stderr: ""
Dec 15 14:13:44.193: INFO: stdout: "service/redis-master created\n"
Dec 15 14:13:44.195: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 15 14:13:44.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1120'
Dec 15 14:13:44.765: INFO: stderr: ""
Dec 15 14:13:44.765: INFO: stdout: "service/frontend created\n"
Dec 15 14:13:44.766: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 15 14:13:44.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1120'
Dec 15 14:13:45.127: INFO: stderr: ""
Dec 15 14:13:45.127: INFO: stdout: "deployment.apps/frontend created\n"
Dec 15 14:13:45.128: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 15 14:13:45.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1120'
Dec 15 14:13:45.682: INFO: stderr: ""
Dec 15 14:13:45.683: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 15 14:13:45.684: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 15 14:13:45.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1120'
Dec 15 14:13:47.927: INFO: stderr: ""
Dec 15 14:13:47.928: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 15 14:13:47.928: INFO: Waiting for all frontend pods to be Running.
Dec 15 14:14:12.980: INFO: Waiting for frontend to serve content.
Dec 15 14:14:13.066: INFO: Trying to add a new entry to the guestbook.
Dec 15 14:14:13.102: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 15 14:14:13.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1120'
Dec 15 14:14:13.259: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:14:13.259: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 14:14:13.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1120'
Dec 15 14:14:13.422: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:14:13.422: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 14:14:13.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1120'
Dec 15 14:14:13.563: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:14:13.563: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 14:14:13.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1120'
Dec 15 14:14:13.780: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:14:13.780: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 14:14:13.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1120'
Dec 15 14:14:13.973: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:14:13.973: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 15 14:14:13.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1120'
Dec 15 14:14:14.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 14:14:14.415: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:14:14.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1120" for this suite.
Dec 15 14:14:58.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:14:58.755: INFO: namespace kubectl-1120 deletion completed in 44.325355907s

• [SLOW TEST:75.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
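Note: the validation step ("Waiting for frontend to serve content", then adding and retrieving an entry) is not spelled out in the log. Roughly, it drives guestbook.php through the API server's service proxy; the sketch below is an assumption of that flow, with <apiserver> left as a placeholder rather than a real address:

# hypothetical reconstruction of the validation calls
curl "https://<apiserver>/api/v1/namespaces/kubectl-1120/services/frontend/proxy/guestbook.php?cmd=set&key=messages&value=TestEntry"
curl "https://<apiserver>/api/v1/namespaces/kubectl-1120/services/frontend/proxy/guestbook.php?cmd=get&key=messages"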
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:14:58.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:15:29.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9061" for this suite.
Dec 15 14:15:35.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:15:35.320: INFO: namespace namespaces-9061 deletion completed in 6.208815381s
STEP: Destroying namespace "nsdeletetest-8958" for this suite.
Dec 15 14:15:35.324: INFO: Namespace nsdeletetest-8958 was already deleted
STEP: Destroying namespace "nsdeletetest-7857" for this suite.
Dec 15 14:15:41.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:15:41.596: INFO: namespace nsdeletetest-7857 deletion completed in 6.272299105s

• [SLOW TEST:42.840 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
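Note: a manual reproduction of the scenario (names are illustrative) follows the same create/delete/recreate/verify sequence the STEP lines describe:

kubectl create namespace nsdeletetest-example
kubectl run test-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never -n nsdeletetest-example
kubectl delete namespace nsdeletetest-example    # namespace deletion removes the pod
kubectl create namespace nsdeletetest-example    # recreate once deletion completes
kubectl get pods -n nsdeletetest-example         # expect: No resources found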
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:15:41.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 15 14:15:41.742: INFO: PodSpec: initContainers in spec.initContainers
Dec 15 14:16:44.477: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5c655cda-8e92-44b3-b1df-edada8d2511d", GenerateName:"", Namespace:"init-container-7984", SelfLink:"/api/v1/namespaces/init-container-7984/pods/pod-init-5c655cda-8e92-44b3-b1df-edada8d2511d", UID:"da86d53f-aaee-48c5-bbe1-92476dbc0476", ResourceVersion:"16770516", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712016141, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"742467742"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r2s88", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002464180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2s88", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2s88", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2s88", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f13468), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0031de540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f13670)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f136b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f136b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f136bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016141, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016141, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016141, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016141, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0019d71c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d46ee0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d46f50)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d6a7657d03aeaf02f68a6de33bb7cbb8b7809bd1a7f0c19a5040e0cb7d0df1a2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0019d7220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0019d71e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:16:44.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7984" for this suite.
Dec 15 14:17:06.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:17:06.737: INFO: namespace init-container-7984 deletion completed in 22.16167369s

• [SLOW TEST:85.141 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
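Note: the PodSpec dump above condenses to the manifest below. init1 always exits non-zero, so init2 and the app container run1 never start, and with restartPolicy Always the kubelet keeps retrying init1 (hence RestartCount:3 in the status dump):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # illustrative; the test uses a UUID-suffixed name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails; blocks everything after it
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                # matches the Guaranteed QoS shown in the dump
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"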
S
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:17:06.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:17:06.891: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 15 14:17:10.087: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:17:10.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7911" for this suite.
Dec 15 14:17:20.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:17:20.474: INFO: namespace replication-controller-7911 deletion completed in 10.312499782s

• [SLOW TEST:13.736 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
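Note: reconstructed from the STEP lines, the quota and ReplicationController look roughly like the following (the container image is an illustrative assumption). With pods: "2" and replicas: 3, the RC surfaces a ReplicaFailure condition until it is scaled down to fit the quota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3          # one more pod than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine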
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:17:20.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-cgtc
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 14:17:20.690: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cgtc" in namespace "subpath-661" to be "success or failure"
Dec 15 14:17:20.705: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.586703ms
Dec 15 14:17:22.715: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02437172s
Dec 15 14:17:24.740: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04943775s
Dec 15 14:17:26.754: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063340197s
Dec 15 14:17:28.762: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 8.071554327s
Dec 15 14:17:30.769: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 10.078473062s
Dec 15 14:17:32.775: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 12.085052376s
Dec 15 14:17:34.783: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 14.092993285s
Dec 15 14:17:36.794: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 16.10335913s
Dec 15 14:17:38.807: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 18.116932879s
Dec 15 14:17:40.815: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 20.125147389s
Dec 15 14:17:42.827: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 22.137120367s
Dec 15 14:17:44.836: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 24.145931637s
Dec 15 14:17:46.845: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 26.154579159s
Dec 15 14:17:48.863: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Running", Reason="", readiness=true. Elapsed: 28.172286188s
Dec 15 14:17:50.912: INFO: Pod "pod-subpath-test-downwardapi-cgtc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.221406158s
STEP: Saw pod success
Dec 15 14:17:50.912: INFO: Pod "pod-subpath-test-downwardapi-cgtc" satisfied condition "success or failure"
Dec 15 14:17:50.918: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-cgtc container test-container-subpath-downwardapi-cgtc: 
STEP: delete the pod
Dec 15 14:17:50.977: INFO: Waiting for pod pod-subpath-test-downwardapi-cgtc to disappear
Dec 15 14:17:50.988: INFO: Pod pod-subpath-test-downwardapi-cgtc no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-cgtc
Dec 15 14:17:50.988: INFO: Deleting pod "pod-subpath-test-downwardapi-cgtc" in namespace "subpath-661"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:17:50.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-661" for this suite.
Dec 15 14:17:57.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:17:57.217: INFO: namespace subpath-661 deletion completed in 6.219004564s

• [SLOW TEST:36.743 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
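Note: a minimal sketch of the kind of pod this test creates (names and command are illustrative assumptions): a downwardAPI volume is mounted via subPath, and the container reads the projected file while the test watches it run and then succeed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume/podname
      subPath: podname          # mount a single file from the atomically-updated volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name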
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:17:57.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:17:57.311: INFO: Creating deployment "test-recreate-deployment"
Dec 15 14:17:57.319: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 15 14:17:57.331: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 15 14:17:59.346: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 15 14:17:59.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:01.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:03.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:05.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016277, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:07.365: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 15 14:18:07.392: INFO: Updating deployment test-recreate-deployment
Dec 15 14:18:07.392: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 15 14:18:07.854: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5338,SelfLink:/apis/apps/v1/namespaces/deployment-5338/deployments/test-recreate-deployment,UID:0a5f9ceb-5cb9-4532-90d5-90e6e057148a,ResourceVersion:16770769,Generation:2,CreationTimestamp:2019-12-15 14:17:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-15 14:18:07 +0000 UTC 2019-12-15 14:18:07 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-15 14:18:07 +0000 UTC 2019-12-15 14:17:57 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 15 14:18:07.875: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5338,SelfLink:/apis/apps/v1/namespaces/deployment-5338/replicasets/test-recreate-deployment-5c8c9cc69d,UID:54eafe6c-a6d6-409d-a257-12aa08082a20,ResourceVersion:16770766,Generation:1,CreationTimestamp:2019-12-15 14:18:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0a5f9ceb-5cb9-4532-90d5-90e6e057148a 0xc00270cae7 0xc00270cae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 15 14:18:07.875: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 15 14:18:07.875: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5338,SelfLink:/apis/apps/v1/namespaces/deployment-5338/replicasets/test-recreate-deployment-6df85df6b9,UID:a544e133-894e-4209-9f7e-032413bfbd56,ResourceVersion:16770756,Generation:2,CreationTimestamp:2019-12-15 14:17:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0a5f9ceb-5cb9-4532-90d5-90e6e057148a 0xc00270cbb7 0xc00270cbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 15 14:18:07.881: INFO: Pod "test-recreate-deployment-5c8c9cc69d-rk79z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-rk79z,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5338,SelfLink:/api/v1/namespaces/deployment-5338/pods/test-recreate-deployment-5c8c9cc69d-rk79z,UID:116674b8-68ca-41ef-a942-b353aff7e979,ResourceVersion:16770770,Generation:0,CreationTimestamp:2019-12-15 14:18:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 54eafe6c-a6d6-409d-a257-12aa08082a20 0xc00270d4a7 0xc00270d4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zltlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zltlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zltlm true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00270d520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00270d540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:18:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:18:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:18:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:18:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-15 14:18:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:18:07.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5338" for this suite.
Dec 15 14:18:15.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:18:16.033: INFO: namespace deployment-5338 deletion completed in 8.142747052s

• [SLOW TEST:18.815 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
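Editor's note: the RecreateDeployment test above switches a Deployment whose strategy is Recreate from redis to nginx, and the ReplicaSet dump at 14:18:07 shows the old ReplicaSet already scaled to zero (Replicas:*0) while the new pod is still Pending. Below is a minimal Go sketch of such a Deployment built with the k8s.io/api types; the object name, labels, and image mirror the log but this is not the test's own code, and it assumes k8s.io/api and k8s.io/apimachinery are on the module path.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate scales every old ReplicaSet to zero before any pod of
			// the new template is created, unlike the default RollingUpdate.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}

Updating the template image on such a Deployment is what triggers the delete-then-create behavior the test verifies.

------------------------------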
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:18:16.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 15 14:18:16.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2947'
Dec 15 14:18:18.232: INFO: stderr: ""
Dec 15 14:18:18.232: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 15 14:18:18.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2947'
Dec 15 14:18:24.001: INFO: stderr: ""
Dec 15 14:18:24.001: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:18:24.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2947" for this suite.
Dec 15 14:18:30.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:18:30.197: INFO: namespace kubectl-2947 deletion completed in 6.151768415s

• [SLOW TEST:14.164 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
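Editor's note: the `kubectl run --restart=Never --generator=run-pod/v1` invocation above creates a bare Pod rather than a Deployment or Job. A rough client-go equivalent is sketched below, using the pre-1.18 method signatures (no context.Context argument) that match this log's v1.15 vintage; the kubeconfig path and namespace are assumptions for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod"},
		Spec: corev1.PodSpec{
			// Never means the kubelet will not restart the container on exit.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(pod) // context argument added in client-go 0.18+
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

------------------------------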
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:18:30.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 15 14:18:30.268: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 15 14:18:31.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:33.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:35.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:37.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:39.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:41.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712016310, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 14:18:46.652: INFO: Waited 3.518746384s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:18:47.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8308" for this suite.
Dec 15 14:18:53.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:18:53.290: INFO: namespace aggregator-8308 deletion completed in 6.183929295s

• [SLOW TEST:23.092 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
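Editor's note: the Aggregator test registers the sample apiserver with the kube-apiserver by creating an APIService object, then polls the backing Deployment (the repeated "deployment status" lines) until it is available. The sketch below shows only the APIService shape, using the real k8s.io/kube-aggregator API types; the group, version, namespace, and service name are placeholders, and the actual test additionally provisions a serving certificate and sets CABundle instead of skipping TLS verification.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	apisvc := &apiregv1.APIService{
		// The object name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.k8s.io",
			Version: "v1alpha1",
			// Route the API group/version to an in-cluster Service that
			// fronts the sample apiserver pods.
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-8308",
				Name:      "sample-api",
			},
			InsecureSkipTLSVerify: true, // test-only shortcut; production should set CABundle
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	fmt.Printf("%+v\n", apisvc)
}

------------------------------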
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:18:53.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 15 14:18:53.462: INFO: Waiting up to 5m0s for pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a" in namespace "emptydir-1423" to be "success or failure"
Dec 15 14:18:53.472: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.836733ms
Dec 15 14:18:55.528: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064991267s
Dec 15 14:18:57.538: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074985159s
Dec 15 14:18:59.576: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112989099s
Dec 15 14:19:01.613: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150598076s
Dec 15 14:19:03.629: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166848727s
STEP: Saw pod success
Dec 15 14:19:03.630: INFO: Pod "pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a" satisfied condition "success or failure"
Dec 15 14:19:03.636: INFO: Trying to get logs from node iruya-node pod pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a container test-container: 
STEP: delete the pod
Dec 15 14:19:03.817: INFO: Waiting for pod pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a to disappear
Dec 15 14:19:03.826: INFO: Pod pod-83d78cd6-6b3c-4dc9-aa57-3320d753f41a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:19:03.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1423" for this suite.
Dec 15 14:19:09.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:19:09.983: INFO: namespace emptydir-1423 deletion completed in 6.15080878s

• [SLOW TEST:16.693 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
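Editor's note: "(non-root,0644,tmpfs)" in the test name means: write a 0644 file, as a non-root user, into an emptyDir backed by memory. A minimal sketch of an equivalent pod follows, assuming k8s.io/api on the module path; the busybox image, UID, and paths are illustrative stand-ins for the test's mounttest image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c",
					"echo data > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumMemory backs the volume with tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------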
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:19:09.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 15 14:19:10.043: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:19:23.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2589" for this suite.
Dec 15 14:19:29.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:19:29.483: INFO: namespace init-container-2589 deletion completed in 6.162497477s

• [SLOW TEST:19.500 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
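Editor's note: the InitContainer test creates a pod whose init containers must all run to completion before the app container starts; with RestartPolicy Never, a failed init container fails the pod permanently. A minimal sketch of that shape, with illustrative names and a busybox image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	busybox := "docker.io/library/busybox:1.29"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run one at a time, in order; each must exit 0
			// before the next starts, and app containers start only after
			// every init container has succeeded.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: busybox, Command: []string{"/bin/true"}},
				{Name: "init2", Image: busybox, Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: busybox, Command: []string{"/bin/true"}},
			},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------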
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:19:29.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 15 14:22:31.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:31.865: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:33.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:33.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:35.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:35.887: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:37.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:37.874: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:39.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:39.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:41.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:41.884: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:43.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:43.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:45.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:46.007: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:47.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:47.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:49.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:49.891: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:51.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:51.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:53.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:53.882: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:55.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:55.875: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:57.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:57.881: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:22:59.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:22:59.886: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:01.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:01.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:03.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:03.876: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:05.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:05.874: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:07.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:07.883: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:09.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:09.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:11.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:11.914: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:13.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:13.885: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:15.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:15.887: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:17.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:17.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:19.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:19.883: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:21.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:21.881: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:23.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:23.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:25.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:25.880: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:27.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:27.877: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:29.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:29.883: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:31.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:31.890: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:33.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:33.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:35.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:35.896: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:37.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:37.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:39.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:39.876: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:41.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:41.883: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:43.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:43.882: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:45.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:45.883: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:47.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:47.891: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:49.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:49.879: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:51.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:51.884: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:53.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:53.881: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:55.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:55.889: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:57.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:57.880: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:23:59.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:23:59.880: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:01.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:01.886: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:03.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:03.888: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:05.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:05.881: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:07.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:07.882: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:09.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:09.878: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:11.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:11.889: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:13.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:13.882: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:15.865: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:15.898: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 15 14:24:17.866: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 15 14:24:17.877: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:24:17.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7104" for this suite.
Dec 15 14:24:39.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:24:40.132: INFO: namespace container-lifecycle-hook-7104 deletion completed in 22.243362566s

• [SLOW TEST:310.648 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
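Editor's note: pod-with-poststart-exec-hook above carries a PostStart exec handler. A minimal sketch of that container shape follows, using the v1.15-era type names (corev1.Handler was later renamed LifecycleHandler); the hook command and sleep are illustrative, not the test's own.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "hooked",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// PostStart runs inside the container immediately after it is
					// created; the container is not considered started until the
					// hook handler returns.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo started > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------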
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:24:40.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:24:48.528: INFO: Waiting up to 5m0s for pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0" in namespace "pods-4159" to be "success or failure"
Dec 15 14:24:48.730: INFO: Pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0": Phase="Pending", Reason="", readiness=false. Elapsed: 201.868274ms
Dec 15 14:24:50.741: INFO: Pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213507162s
Dec 15 14:24:52.749: INFO: Pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220979433s
Dec 15 14:24:54.762: INFO: Pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234634934s
Dec 15 14:24:56.785: INFO: Pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.256972843s
STEP: Saw pod success
Dec 15 14:24:56.785: INFO: Pod "client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0" satisfied condition "success or failure"
Dec 15 14:24:56.796: INFO: Trying to get logs from node iruya-node pod client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0 container env3cont: 
STEP: delete the pod
Dec 15 14:24:56.970: INFO: Waiting for pod client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0 to disappear
Dec 15 14:24:56.979: INFO: Pod client-envvars-3a7543a7-63f0-4029-94cc-06c41a7ab2a0 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:24:56.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4159" for this suite.
Dec 15 14:25:49.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:25:49.082: INFO: namespace pods-4159 deletion completed in 52.097118955s

• [SLOW TEST:68.949 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
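Editor's note: the Pods test above checks that the kubelet injects environment variables for every Service that exists when a pod starts. The variable names follow a fixed convention: the service name upper-cased with dashes replaced by underscores, plus suffixes such as _SERVICE_HOST and _SERVICE_PORT. A tiny helper illustrating that convention (the service name "fooservice" is an assumption for the example):

package main

import (
	"fmt"
	"strings"
)

// envName mimics the kubelet's naming convention for service environment
// variables: service name upper-cased, dashes replaced by underscores.
func envName(serviceName, suffix string) string {
	return strings.Replace(strings.ToUpper(serviceName), "-", "_", -1) + suffix
}

func main() {
	for _, suffix := range []string{"_SERVICE_HOST", "_SERVICE_PORT", "_PORT"} {
		fmt.Println(envName("fooservice", suffix)) // e.g. FOOSERVICE_SERVICE_HOST
	}
}

Because injection happens at container start, only services created before the client pod starts show up, which is why the test creates its service first.

------------------------------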
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:25:49.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 15 14:26:03.851: INFO: Successfully updated pod "annotationupdate6bbade85-c896-4537-8b29-8597259381ad"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:26:06.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8308" for this suite.
Dec 15 14:26:28.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:26:28.396: INFO: namespace projected-8308 deletion completed in 22.289372377s

• [SLOW TEST:39.315 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
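Editor's note: the annotationupdate pod above mounts its own annotations through a projected downward API volume; the kubelet rewrites the file when the annotations change, which is what "Successfully updated pod" is exercising. A minimal sketch of such a pod, with illustrative names and a busybox loop in place of the test's mounttest container:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "annotations",
									// The kubelet refreshes this file when the pod's
									// annotations are updated, which the test polls for.
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------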
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:26:28.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7552
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 15 14:26:28.583: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 15 14:27:18.902: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7552 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:27:18.902: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:27:19.481: INFO: Found all expected endpoints: [netserver-0]
Dec 15 14:27:19.487: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7552 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:27:19.487: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:27:19.910: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:27:19.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7552" for this suite.
Dec 15 14:27:47.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:27:48.039: INFO: namespace pod-network-test-7552 deletion completed in 28.118900252s

• [SLOW TEST:79.642 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:27:48.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1959
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 15 14:27:48.193: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 15 14:28:40.494: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1959 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:28:40.494: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:28:40.957: INFO: Waiting for endpoints: map[]
Dec 15 14:28:40.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1959 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:28:40.966: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:28:41.312: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:28:41.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1959" for this suite.
Dec 15 14:29:09.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:29:09.528: INFO: namespace pod-network-test-1959 deletion completed in 28.173334966s

• [SLOW TEST:81.488 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
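Editor's note: the ExecWithOptions lines in the two networking tests above are the e2e framework running curl inside the host-test-container pod against each netserver pod's IP. A rough client-go sketch of that exec mechanism follows, using the pre-1.18 signatures and the pod, container, namespace, and URL taken from the intra-pod log; treat it as an approximation of the framework's helper, not its actual code.

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Build the pods/exec subresource request, equivalent to `kubectl exec`.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pod-network-test-1959").
		Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "hostexec",
			Command: []string{"/bin/sh", "-c",
				"curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"},
			Stdout: true,
			Stderr: true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream blocks until the remote command exits.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println("stdout:", stdout.String())
}

------------------------------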
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:29:09.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 15 14:29:09.739: INFO: Waiting up to 5m0s for pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb" in namespace "downward-api-4404" to be "success or failure"
Dec 15 14:29:09.755: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.138621ms
Dec 15 14:29:11.763: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024424164s
Dec 15 14:29:13.786: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046476439s
Dec 15 14:29:15.798: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058724538s
Dec 15 14:29:17.814: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075338111s
Dec 15 14:29:19.830: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090958415s
Dec 15 14:29:21.841: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.101972795s
Dec 15 14:29:23.852: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.112949989s
Dec 15 14:29:25.873: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.13420378s
STEP: Saw pod success
Dec 15 14:29:25.873: INFO: Pod "downward-api-17dd3792-ea49-417b-b818-d97932aee0eb" satisfied condition "success or failure"
Dec 15 14:29:25.886: INFO: Trying to get logs from node iruya-node pod downward-api-17dd3792-ea49-417b-b818-d97932aee0eb container dapi-container: 
STEP: delete the pod
Dec 15 14:29:26.016: INFO: Waiting for pod downward-api-17dd3792-ea49-417b-b818-d97932aee0eb to disappear
Dec 15 14:29:26.027: INFO: Pod downward-api-17dd3792-ea49-417b-b818-d97932aee0eb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:29:26.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4404" for this suite.
Dec 15 14:29:32.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:29:32.148: INFO: namespace downward-api-4404 deletion completed in 6.11477297s

• [SLOW TEST:22.620 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
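Editor's note: the Downward API test above injects the node's IP into the container's environment via a fieldRef on status.hostIP, then checks the container's output. A minimal sketch of an equivalent pod (illustrative names; busybox stands in for the test image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env | grep HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					// The fieldRef is resolved when the pod starts;
					// status.hostIP is the IP of the node running the pod.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------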
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:29:32.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 15 14:29:47.620: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:29:47.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1068" for this suite.
Dec 15 14:29:53.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:29:53.909: INFO: namespace container-runtime-1068 deletion completed in 6.122501087s

• [SLOW TEST:21.761 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
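Editor's note: the termination-message test writes "DONE" to a non-default terminationMessagePath as a non-root user; on exit the kubelet copies that file into the container's terminated state, which is the "Expected: &{DONE} to match" check above. A minimal sketch of such a container (UID and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; the path must be writable by it
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path; the kubelet reads this file into the
				// container's terminated status when the container exits.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------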
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:29:53.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 15 14:30:28.338: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:28.338: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:28.818: INFO: Exec stderr: ""
Dec 15 14:30:28.818: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:28.819: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:29.202: INFO: Exec stderr: ""
Dec 15 14:30:29.202: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:29.202: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:29.570: INFO: Exec stderr: ""
Dec 15 14:30:29.570: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:29.570: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:29.935: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 15 14:30:29.936: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:29.936: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:30.451: INFO: Exec stderr: ""
Dec 15 14:30:30.451: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:30.451: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:30.839: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 15 14:30:30.840: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:30.840: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:31.130: INFO: Exec stderr: ""
Dec 15 14:30:31.131: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:31.131: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:31.391: INFO: Exec stderr: ""
Dec 15 14:30:31.391: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:31.392: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:31.755: INFO: Exec stderr: ""
Dec 15 14:30:31.755: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-270 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 14:30:31.755: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 14:30:32.155: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:30:32.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-270" for this suite.
Dec 15 14:31:36.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:31:36.327: INFO: namespace e2e-kubelet-etc-hosts-270 deletion completed in 1m4.158418744s

• [SLOW TEST:102.417 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
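Editor's note: the KubeletManagedEtcHosts test checks three cases: the kubelet manages /etc/hosts for ordinary containers (busybox-1, busybox-2), leaves it alone when the container mounts its own volume there (busybox-3), and leaves it alone entirely for hostNetwork=true pods. The opt-out case is sketched below; mounting a hostPath over /etc/hosts mirrors what the test's busybox-3 does, though the exact volume wiring here is an approximation.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-optout-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox-3",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "600"},
				// A container that mounts anything at /etc/hosts is skipped by
				// the kubelet's /etc/hosts management.
				VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------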
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:31:36.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 14:31:36.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b" in namespace "projected-6432" to be "success or failure"
Dec 15 14:31:36.630: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.918145ms
Dec 15 14:31:38.648: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077373131s
Dec 15 14:31:40.662: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091608787s
Dec 15 14:31:42.677: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105936978s
Dec 15 14:31:44.689: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117928231s
Dec 15 14:31:46.696: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125538245s
Dec 15 14:31:48.703: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.132349771s
Dec 15 14:31:50.776: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.205007929s
STEP: Saw pod success
Dec 15 14:31:50.776: INFO: Pod "downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b" satisfied condition "success or failure"
Dec 15 14:31:50.781: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b container client-container: 
STEP: delete the pod
Dec 15 14:31:50.875: INFO: Waiting for pod downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b to disappear
Dec 15 14:31:50.939: INFO: Pod downwardapi-volume-41797d66-b4b5-4066-9b13-6324c008911b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:31:50.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6432" for this suite.
Dec 15 14:31:56.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:31:57.064: INFO: namespace projected-6432 deletion completed in 6.101496475s

• [SLOW TEST:20.736 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
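Editor's note: the DefaultMode test verifies that a projected downward API volume applies a default file mode to the files it renders. A minimal sketch follows; the mode 0400 and file layout are illustrative assumptions, and busybox stands in for the test's mounttest image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // octal: owner read-only
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// DefaultMode applies to every projected file that does
						// not set an explicit per-file Mode.
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

------------------------------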
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:31:57.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 15 14:31:57.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1822 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 15 14:32:13.343: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 15 14:32:13.344: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:32:15.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1822" for this suite.
Dec 15 14:32:21.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:32:21.752: INFO: namespace kubectl-1822 deletion completed in 6.389303315s

• [SLOW TEST:24.687 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
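The stderr above shows --generator=job/v1 was already deprecated at this version. The spec pipes "abcd1234" into the attached container, which cats it back and then echoes "stdin closed" before --rm deletes the job, which is exactly the stdout captured above. The same invocation, reduced to a stand-alone shell line (flags, image, and names taken from the log):

echo 'abcd1234' | kubectl --namespace=kubectl-1822 run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
    --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# On kubectl releases where --generator has been removed, the deprecation
# message's own suggestion applies: create the job with "kubectl create job"
# and attach to its pod separately.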
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:32:21.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:33:21.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1539" for this suite.
Dec 15 14:33:46.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:33:46.126: INFO: namespace container-probe-1539 deletion completed in 24.122100336s

• [SLOW TEST:84.373 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
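The spec body leaves no pod output here because the assertion is purely negative: a pod whose readiness probe always fails must stay Ready=false for the whole observation window and must never restart, since restarts follow liveness failures, not readiness failures. A minimal sketch of such a pod (name and probe command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so Ready never becomes true
      initialDelaySeconds: 1
      periodSeconds: 5
EOF
# Expect READY 0/1 and RESTARTS 0 for as long as the pod is observed:
kubectl get pod never-ready-demo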
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:33:46.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-71538b2b-51eb-4870-a44e-f64aaad391ad
STEP: Creating a pod to test consume secrets
Dec 15 14:33:46.417: INFO: Waiting up to 5m0s for pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1" in namespace "secrets-291" to be "success or failure"
Dec 15 14:33:46.430: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.260445ms
Dec 15 14:33:48.447: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029646076s
Dec 15 14:33:50.457: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03965215s
Dec 15 14:33:52.509: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091227762s
Dec 15 14:33:54.524: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106423633s
Dec 15 14:33:56.537: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119341627s
Dec 15 14:33:58.552: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.134587801s
Dec 15 14:34:00.568: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.149959774s
Dec 15 14:34:02.583: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.164980629s
STEP: Saw pod success
Dec 15 14:34:02.583: INFO: Pod "pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1" satisfied condition "success or failure"
Dec 15 14:34:02.592: INFO: Trying to get logs from node iruya-node pod pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1 container secret-volume-test: 
STEP: delete the pod
Dec 15 14:34:03.120: INFO: Waiting for pod pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1 to disappear
Dec 15 14:34:03.128: INFO: Pod pod-secrets-8a5829e6-a11f-430b-a062-b6743a4002b1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:34:03.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-291" for this suite.
Dec 15 14:34:09.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:34:09.341: INFO: namespace secrets-291 deletion completed in 6.203984758s

• [SLOW TEST:23.215 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
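"With mappings" refers to the items list of a secret volume, which places a secret key at a custom file path instead of mounting it under the key name. A minimal sketch of the shape the spec exercises (secret, pod, key, and path names are illustrative):

kubectl create secret generic mapped-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mapped-secret-demo
      items:
      - key: data-1
        path: new-path-data-1   # the mapping: key data-1 appears at this path
EOF
# The container should print value-1, read from the remapped path.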
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:34:09.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a projection with a secret named secret-emptykey-test-d062137b-0561-49c3-a0ed-b105ba98a643
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:34:09.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1095" for this suite.
Dec 15 14:34:15.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:34:15.696: INFO: namespace secrets-1095 deletion completed in 6.170955082s

• [SLOW TEST:6.355 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
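No pod is created in this spec: the failure is expected at admission, because Secret validation requires every data key to be a non-empty, valid file name. A sketch that reproduces the rejection (the secret name is illustrative; the value is base64 for "value-1"):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWUtMQ==
EOF
# Expected: the API server rejects the object with a validation error
# for the empty key rather than persisting it.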
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:34:15.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:34:16.199: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.993922ms)
Dec 15 14:34:16.210: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.995112ms)
Dec 15 14:34:16.218: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.460489ms)
Dec 15 14:34:16.228: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.539656ms)
Dec 15 14:34:16.237: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.101098ms)
Dec 15 14:34:16.246: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.05886ms)
Dec 15 14:34:16.403: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 157.469358ms)
Dec 15 14:34:16.413: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.102919ms)
Dec 15 14:34:16.422: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.659848ms)
Dec 15 14:34:16.434: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.513803ms)
Dec 15 14:34:16.455: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.077366ms)
Dec 15 14:34:16.471: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.658957ms)
Dec 15 14:34:16.482: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.055684ms)
Dec 15 14:34:16.497: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.632596ms)
Dec 15 14:34:16.506: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.712836ms)
Dec 15 14:34:16.515: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.338044ms)
Dec 15 14:34:16.525: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.157592ms)
Dec 15 14:34:16.533: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.994044ms)
Dec 15 14:34:16.541: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.06147ms)
Dec 15 14:34:16.551: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.484929ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:34:16.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2664" for this suite.
Dec 15 14:34:22.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:34:22.805: INFO: namespace proxy-2664 deletion completed in 6.246298805s

• [SLOW TEST:7.108 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
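Each numbered line above is one GET against the node's logs through the proxy subresource of the Node object, repeated 20 times to sample latency; the truncated response body is the directory listing of the node's log directory. The same request can be issued once by hand:

kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/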
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:34:22.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6361
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6361
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6361
Dec 15 14:34:23.061: INFO: Found 0 stateful pods, waiting for 1
Dec 15 14:34:33.070: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Dec 15 14:34:43.072: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 15 14:34:43.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 15 14:34:43.844: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 15 14:34:43.844: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 15 14:34:43.844: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

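Moving index.html out of the web root is how the spec manufactures an unhealthy pod: the fixture's readiness probe is an HTTP GET against the served index page (per the e2e StatefulSet fixture), so the probe starts failing and Ready flips to false, which the next two waits confirm; moving the file back later restores readiness. The two operations, stripped of the framework wrapper:

kubectl exec -n statefulset-6361 ss-0 -- sh -c 'mv /usr/share/nginx/html/index.html /tmp/ || true'   # break readiness
kubectl exec -n statefulset-6361 ss-0 -- sh -c 'mv /tmp/index.html /usr/share/nginx/html/ || true'   # restore readiness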
Dec 15 14:34:43.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 15 14:34:53.933: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 14:34:53.933: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 14:34:53.963: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 15 14:34:53.963: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:34:53.963: INFO: 
Dec 15 14:34:53.963: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 15 14:34:55.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985063961s
Dec 15 14:34:57.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.185426346s
Dec 15 14:34:58.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.651158897s
Dec 15 14:34:59.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.414408081s
Dec 15 14:35:00.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.40594807s
Dec 15 14:35:01.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.391652829s
Dec 15 14:35:04.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.363626561s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6361
Dec 15 14:35:05.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 14:35:08.305: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 15 14:35:08.305: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 15 14:35:08.305: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 15 14:35:08.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 14:35:08.878: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 15 14:35:08.879: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 15 14:35:08.879: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 15 14:35:08.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 14:35:09.379: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 15 14:35:09.379: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 15 14:35:09.379: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 15 14:35:09.422: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 14:35:09.422: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Dec 15 14:35:19.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 14:35:19.435: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 14:35:19.435: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
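Note that ss-1 and ss-2 were created while ss-0 was still unready: that is the "burst" in the spec name, i.e. Parallel pod management, which creates and deletes pods without waiting for each ordinal to become Ready the way the default OrderedReady policy does. A minimal sketch of a StatefulSet with that policy (name, image, and probe are illustrative; the headless service "test" is assumed to exist, as the spec above created it):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-burst-demo
spec:
  serviceName: test
  podManagementPolicy: Parallel   # burst create/delete, no per-ordinal Ready gate
  replicas: 3
  selector:
    matchLabels:
      app: ss-burst-demo
  template:
    metadata:
      labels:
        app: ss-burst-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF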
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Dec 15 14:35:19.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 15 14:35:20.042: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 15 14:35:20.042: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 15 14:35:20.042: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 15 14:35:20.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 15 14:35:20.559: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 15 14:35:20.560: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 15 14:35:20.560: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 15 14:35:20.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 15 14:35:21.231: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 15 14:35:21.231: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 15 14:35:21.231: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 15 14:35:21.231: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 14:35:21.253: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 15 14:35:31.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 14:35:31.271: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 14:35:31.271: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 15 14:35:31.298: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:31.298: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:31.298: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:31.298: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:31.298: INFO: 
Dec 15 14:35:31.298: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:34.036: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:34.036: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:34.036: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:34.036: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:34.036: INFO: 
Dec 15 14:35:34.036: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:35.077: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:35.077: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:35.077: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:35.077: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:35.077: INFO: 
Dec 15 14:35:35.077: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:36.087: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:36.087: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:36.087: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:36.087: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:36.087: INFO: 
Dec 15 14:35:36.087: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:37.309: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:37.309: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:37.310: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:37.310: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:37.310: INFO: 
Dec 15 14:35:37.310: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:38.398: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:38.398: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:38.398: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:38.398: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:38.398: INFO: 
Dec 15 14:35:38.398: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:39.477: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:39.477: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:39.477: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:39.477: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:39.477: INFO: 
Dec 15 14:35:39.477: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 15 14:35:40.505: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 15 14:35:40.505: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:23 +0000 UTC  }]
Dec 15 14:35:40.505: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:53 +0000 UTC  }]
Dec 15 14:35:40.505: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:34:54 +0000 UTC  }]
Dec 15 14:35:40.505: INFO: 
Dec 15 14:35:40.505: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6361
Dec 15 14:35:41.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 14:35:41.813: INFO: rc: 1
Dec 15 14:35:41.814: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002e307b0 exit status 1   true [0xc000189d88 0xc000189db0 0xc000189df8] [0xc000189d88 0xc000189db0 0xc000189df8] [0xc000189da0 0xc000189dd8] [0xba6c50 0xba6c50] 0xc001a5fe60 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 15 14:35:51.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 14:35:51.967: INFO: rc: 1
Dec 15 14:35:51.968: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000337bf0 exit status 1   true [0xc00137c0a0 0xc00137c0b8 0xc00137c0d8] [0xc00137c0a0 0xc00137c0b8 0xc00137c0d8] [0xc00137c0b0 0xc00137c0c8] [0xba6c50 0xba6c50] 0xc002537c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[The same RunHostCmd attempt was retried every 10s from 14:36:01.968 through 14:40:36.862 (28 further attempts, identical apart from timestamps and Go pointer dumps), each returning rc: 1 with the same stderr: Error from server (NotFound): pods "ss-0" not found]
Dec 15 14:40:46.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 15 14:40:47.083: INFO: rc: 1
Dec 15 14:40:47.083: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 15 14:40:47.083: INFO: Scaling statefulset ss to 0
Dec 15 14:40:47.094: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 15 14:40:47.096: INFO: Deleting all statefulset in ns statefulset-6361
Dec 15 14:40:47.105: INFO: Scaling statefulset ss to 0
Dec 15 14:40:47.301: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 14:40:47.305: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:40:47.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6361" for this suite.
Dec 15 14:40:55.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:40:55.576: INFO: namespace statefulset-6361 deletion completed in 8.247482376s

• [SLOW TEST:392.771 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
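
The long NotFound loop above is the framework's host-command helper re-running one kubectl exec every 10s against a pod that burst scale-down had already deleted; the trailing "|| true" forces the remote shell to succeed, so only the API server's NotFound can fail an attempt. A minimal shell sketch of that pattern, reusing the namespace and pod name from this run (the loop bound and echo message are illustrative, not the framework's actual Go code):

  # Retry the exec every 10s; kubectl fails only while ss-0 is absent,
  # because the remote command itself is forced to succeed with `|| true`.
  for i in 1 2 3 4 5 6 7 8; do
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6361 ss-0 -- \
      /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true' && break
    echo "attempt $i failed; retrying in 10s"
    sleep 10
  done
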
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:40:55.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:40:55.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 15 14:40:55.898: INFO: stderr: ""
Dec 15 14:40:55.898: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:40:55.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-275" for this suite.
Dec 15 14:41:04.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:41:04.127: INFO: namespace kubectl-275 deletion completed in 8.216375592s

• [SLOW TEST:8.551 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
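
This spec only asserts that `kubectl version` prints both a client and a server stanza, which the stdout above shows. The same check done by hand (the grep patterns are mine, not the test's):

  out=$(kubectl --kubeconfig=/root/.kube/config version)
  echo "$out" | grep -q 'Client Version' || echo 'client version info missing'
  echo "$out" | grep -q 'Server Version' || echo 'server version info missing'
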
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:41:04.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-5f5118dd-db56-4da9-8046-7641a96f9a0e
STEP: Creating a pod to test consume secrets
Dec 15 14:41:04.415: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91" in namespace "projected-8233" to be "success or failure"
Dec 15 14:41:04.426: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685086ms
Dec 15 14:41:06.784: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36905259s
Dec 15 14:41:08.794: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.379120877s
Dec 15 14:41:10.840: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424915796s
Dec 15 14:41:12.856: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4414263s
Dec 15 14:41:14.865: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 10.449570194s
Dec 15 14:41:16.910: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Pending", Reason="", readiness=false. Elapsed: 12.495505068s
Dec 15 14:41:18.946: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.531068569s
STEP: Saw pod success
Dec 15 14:41:18.946: INFO: Pod "pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91" satisfied condition "success or failure"
Dec 15 14:41:19.030: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91 container projected-secret-volume-test: 
STEP: delete the pod
Dec 15 14:41:19.204: INFO: Waiting for pod pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91 to disappear
Dec 15 14:41:19.233: INFO: Pod pod-projected-secrets-eef8ce96-a7d6-45f5-9a66-36cbdee62f91 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:41:19.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8233" for this suite.
Dec 15 14:41:25.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:41:25.433: INFO: namespace projected-8233 deletion completed in 6.169010203s

• [SLOW TEST:21.305 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
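
The pod here mounts a projected volume that remaps a secret key to a new path with an explicit per-item mode, then reads it back. A minimal sketch of that volume shape, assuming a pre-existing secret named my-secret (the pod name, key, path and mode are placeholders, not the values this run used):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo   # hypothetical name
  spec:
    volumes:
    - name: secret-vol
      projected:
        sources:
        - secret:
            name: my-secret       # assumed to exist
            items:
            - key: data-1
              path: new-path-data-1
              mode: 0400          # the per-item "Item Mode" of the test title
    containers:
    - name: show
      image: busybox
      command: ["sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"]
      volumeMounts:
      - name: secret-vol
        mountPath: /projected
        readOnly: true
    restartPolicy: Never
  EOF
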
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:41:25.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 15 14:41:39.818: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e1aa0923-b683-4a13-ab66-0b36d17facb2,GenerateName:,Namespace:events-8226,SelfLink:/api/v1/namespaces/events-8226/pods/send-events-e1aa0923-b683-4a13-ab66-0b36d17facb2,UID:4fdc9124-1a57-40f3-8b3a-4d0581e854f0,ResourceVersion:16773493,Generation:0,CreationTimestamp:2019-12-15 14:41:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 606580444,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tx94k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tx94k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-tx94k true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023bcf40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023bcf60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:41:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:41:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:41:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:41:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-15 14:41:25 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-15 14:41:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://8c7a63b821dc97f84a96ff6190398e70c7ff4995ca1bca8afdc850151e37d87e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 15 14:41:41.835: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 15 14:41:43.851: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:41:43.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8226" for this suite.
Dec 15 14:42:23.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:42:24.094: INFO: namespace events-8226 deletion completed in 40.203775642s

• [SLOW TEST:58.661 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
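
The spec waits for one event from the scheduler and one from the kubelet for its pod. An equivalent inspection with plain kubectl, using the pod and namespace from this run:

  kubectl --namespace=events-8226 get events \
    --field-selector involvedObject.name=send-events-e1aa0923-b683-4a13-ab66-0b36d17facb2
  # Expect a Scheduled event from default-scheduler plus Pulled/Created/Started
  # events from kubelet, iruya-node.
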
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:42:24.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-564dd07f-5e46-4d84-a5ac-3f919c6fd704
STEP: Creating a pod to test consume secrets
Dec 15 14:42:24.307: INFO: Waiting up to 5m0s for pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596" in namespace "secrets-5673" to be "success or failure"
Dec 15 14:42:24.584: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 275.888096ms
Dec 15 14:42:26.606: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298359981s
Dec 15 14:42:29.128: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 4.819849837s
Dec 15 14:42:31.779: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 7.471302468s
Dec 15 14:42:33.791: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 9.483513298s
Dec 15 14:42:35.807: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 11.499438173s
Dec 15 14:42:37.822: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 13.514137251s
Dec 15 14:42:39.849: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Pending", Reason="", readiness=false. Elapsed: 15.541404818s
Dec 15 14:42:42.395: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.087148682s
STEP: Saw pod success
Dec 15 14:42:42.395: INFO: Pod "pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596" satisfied condition "success or failure"
Dec 15 14:42:42.405: INFO: Trying to get logs from node iruya-node pod pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596 container secret-volume-test: 
STEP: delete the pod
Dec 15 14:42:42.803: INFO: Waiting for pod pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596 to disappear
Dec 15 14:42:42.827: INFO: Pod pod-secrets-501c56b2-e8c9-4829-b497-8bffe40ae596 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:42:42.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5673" for this suite.
Dec 15 14:42:49.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:42:49.086: INFO: namespace secrets-5673 deletion completed in 6.252152099s

• [SLOW TEST:24.991 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
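
The pod under test runs as a non-root user, so the secret volume gets both a defaultMode and a pod-level fsGroup to keep the files readable by that user. A minimal sketch of the combination, with an assumed secret name and illustrative uid/gid/mode values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo        # hypothetical name
  spec:
    securityContext:
      runAsUser: 1000             # non-root, as the test title requires
      fsGroup: 1000               # group ownership applied to the volume
    volumes:
    - name: secret-vol
      secret:
        secretName: my-secret     # assumed to exist
        defaultMode: 0440         # octal; group-readable, so fsGroup matters
    containers:
    - name: show
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret-volume"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret-volume
    restartPolicy: Never
  EOF
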
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:42:49.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 15 14:42:49.270: INFO: Waiting up to 5m0s for pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67" in namespace "emptydir-9638" to be "success or failure"
Dec 15 14:42:49.301: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 30.039432ms
Dec 15 14:42:51.313: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042647686s
Dec 15 14:42:53.627: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356206196s
Dec 15 14:42:55.634: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363907411s
Dec 15 14:42:57.644: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373491362s
Dec 15 14:42:59.653: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382154335s
Dec 15 14:43:01.662: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Pending", Reason="", readiness=false. Elapsed: 12.391201895s
Dec 15 14:43:03.668: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.39728818s
STEP: Saw pod success
Dec 15 14:43:03.668: INFO: Pod "pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67" satisfied condition "success or failure"
Dec 15 14:43:03.671: INFO: Trying to get logs from node iruya-node pod pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67 container test-container: 
STEP: delete the pod
Dec 15 14:43:03.744: INFO: Waiting for pod pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67 to disappear
Dec 15 14:43:03.747: INFO: Pod pod-dcb58cf9-30aa-4b54-93ff-1483d063ff67 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:43:03.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9638" for this suite.
Dec 15 14:43:09.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:43:10.020: INFO: namespace emptydir-9638 deletion completed in 6.267180003s

• [SLOW TEST:20.933 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
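
An emptyDir with medium: Memory is tmpfs-backed, which is what this "(non-root,0777,tmpfs)" variant exercises; the conformance test uses a dedicated mount-test image to set and verify the 0777 mode, which the sketch below only approximates by listing the mount from a non-root busybox container (names and uid are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo     # hypothetical name
  spec:
    securityContext:
      runAsUser: 1000             # non-root
    volumes:
    - name: cache
      emptyDir:
        medium: Memory            # tmpfs-backed emptyDir
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /cache && grep ' /cache ' /proc/mounts"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    restartPolicy: Never
  EOF
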
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:43:10.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 15 14:43:10.170: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 15 14:43:10.212: INFO: Waiting for terminating namespaces to be deleted...
Dec 15 14:43:10.219: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 15 14:43:10.237: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.237: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 14:43:10.237: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 15 14:43:10.237: INFO: 	Container weave ready: true, restart count 0
Dec 15 14:43:10.237: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 14:43:10.237: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 15 14:43:10.299: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container etcd ready: true, restart count 0
Dec 15 14:43:10.299: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container weave ready: true, restart count 0
Dec 15 14:43:10.299: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 14:43:10.299: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container coredns ready: true, restart count 0
Dec 15 14:43:10.299: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 15 14:43:10.299: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 14:43:10.299: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 15 14:43:10.299: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 15 14:43:10.299: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 15 14:43:10.299: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e09289d9b910e1], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:43:11.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5482" for this suite.
Dec 15 14:43:17.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:43:17.765: INFO: namespace sched-pred-5482 deletion completed in 6.384071525s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.745 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
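
The predicate test creates a pod whose nodeSelector matches no node, then waits for the FailedScheduling event shown above. Reproducing that by hand (pod name, selector label and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod-demo     # hypothetical name
  spec:
    nodeSelector:
      label: nonexistent-value    # deliberately matches no node
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  EOF
  kubectl describe pod restricted-pod-demo | grep -A1 FailedScheduling
  # Expect: 0/2 nodes are available: 2 node(s) didn't match node selector.
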
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:43:17.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:43:17.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1916'
Dec 15 14:43:21.148: INFO: stderr: ""
Dec 15 14:43:21.149: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 15 14:43:21.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1916'
Dec 15 14:43:21.950: INFO: stderr: ""
Dec 15 14:43:21.950: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 15 14:43:22.958: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:43:22.958: INFO: Found 0 / 1
[Dec 15 14:43:23 to 14:43:33: the poll repeated roughly every second, each time matching 1 pod for map[app:redis] but finding 0 / 1 running]
Dec 15 14:43:34.965: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:43:34.965: INFO: Found 1 / 1
Dec 15 14:43:34.965: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 15 14:43:34.970: INFO: Selector matched 1 pods for map[app:redis]
Dec 15 14:43:34.970: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 15 14:43:34.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-tsshh --namespace=kubectl-1916'
Dec 15 14:43:35.137: INFO: stderr: ""
Dec 15 14:43:35.137: INFO: stdout: "Name:           redis-master-tsshh\nNamespace:      kubectl-1916\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 15 Dec 2019 14:43:21 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://6ea705791973fe9b90e4e6262ae2c3a830121930a3b9f68ad2e6baa27a2e871a\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 15 Dec 2019 14:43:33 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5z9js (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-5z9js:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-5z9js\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  14s   default-scheduler    Successfully assigned kubectl-1916/redis-master-tsshh to iruya-node\n  Normal  Pulled     8s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Dec 15 14:43:35.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1916'
Dec 15 14:43:35.278: INFO: stderr: ""
Dec 15 14:43:35.278: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1916\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  14s   replication-controller  Created pod: redis-master-tsshh\n"
Dec 15 14:43:35.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1916'
Dec 15 14:43:35.458: INFO: stderr: ""
Dec 15 14:43:35.458: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1916\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.100.212.199\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Dec 15 14:43:35.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 15 14:43:35.646: INFO: stderr: ""
Dec 15 14:43:35.646: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 15 Dec 2019 14:43:27 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 15 Dec 2019 14:43:27 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 15 Dec 2019 14:43:27 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 15 Dec 2019 14:43:27 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         133d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         64d\n  kubectl-1916               redis-master-tsshh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Dec 15 14:43:35.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1916'
Dec 15 14:43:35.747: INFO: stderr: ""
Dec 15 14:43:35.747: INFO: stdout: "Name:         kubectl-1916\nLabels:       e2e-framework=kubectl\n              e2e-run=d05d2190-cb43-4fed-bf15-7846e176820e\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:43:35.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1916" for this suite.
Dec 15 14:43:57.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:43:57.917: INFO: namespace kubectl-1916 deletion completed in 22.16717189s

• [SLOW TEST:40.151 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
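
The spec walks `kubectl describe` across every object it created plus the node and namespace, asserting each output contains the expected fields. The same sequence by hand, using the names from this run:

  ns=kubectl-1916
  kubectl --namespace=$ns describe pod redis-master-tsshh
  kubectl --namespace=$ns describe rc redis-master
  kubectl --namespace=$ns describe service redis-master
  kubectl describe node iruya-node
  kubectl describe namespace $ns
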
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:43:57.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 15 14:43:58.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5541'
Dec 15 14:43:58.197: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 14:43:58.197: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 15 14:43:58.205: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 15 14:43:58.219: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 15 14:43:58.403: INFO: scanned /root for discovery docs: 
Dec 15 14:43:58.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5541'
Dec 15 14:44:25.822: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 15 14:44:25.822: INFO: stdout: "Created e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b\nScaling up e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 15 14:44:25.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5541'
Dec 15 14:44:26.040: INFO: stderr: ""
Dec 15 14:44:26.040: INFO: stdout: "e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b-689rq e2e-test-nginx-rc-jgxfd "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
[Dec 15 14:44:31 to 14:45:28: the same get-pods poll ran every 5s, each time listing e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b-689rq and e2e-test-nginx-rc-jgxfd and logging "Replicas for run=e2e-test-nginx-rc: expected=1 actual=2" while the old controller drained]
Dec 15 14:45:33.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5541'
Dec 15 14:45:33.350: INFO: stderr: ""
Dec 15 14:45:33.350: INFO: stdout: "e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b-689rq "
Dec 15 14:45:33.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b-689rq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5541'
Dec 15 14:45:33.538: INFO: stderr: ""
Dec 15 14:45:33.538: INFO: stdout: "true"
Dec 15 14:45:33.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b-689rq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5541'
Dec 15 14:45:33.665: INFO: stderr: ""
Dec 15 14:45:33.665: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 15 14:45:33.665: INFO: e2e-test-nginx-rc-1aa2bc796329159f214a8827f35c1a3b-689rq is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 15 14:45:33.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5541'
Dec 15 14:45:33.829: INFO: stderr: ""
Dec 15 14:45:33.829: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:45:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5541" for this suite.
Dec 15 14:45:57.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:45:58.056: INFO: namespace kubectl-5541 deletion completed in 24.141408789s

• [SLOW TEST:120.138 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
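
Both commands this spec drives were already deprecated in v1.15 (the stderr above says so, and rolling-update was removed from kubectl in later releases). Stripped of the kubeconfig and namespace flags, the sequence was:

  kubectl run e2e-test-nginx-rc --generator=run/v1 \
    --image=docker.io/library/nginx:1.14-alpine
  kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
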
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:45:58.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 15 14:45:58.361: INFO: Waiting up to 5m0s for pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6" in namespace "var-expansion-2145" to be "success or failure"
Dec 15 14:45:58.385: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.221816ms
Dec 15 14:46:00.397: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036300589s
Dec 15 14:46:02.405: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044215615s
Dec 15 14:46:04.410: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049390878s
Dec 15 14:46:06.427: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066375027s
Dec 15 14:46:08.437: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076169007s
Dec 15 14:46:10.447: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.08640545s
Dec 15 14:46:12.510: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.149134867s
Dec 15 14:46:14.584: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.223102827s
STEP: Saw pod success
Dec 15 14:46:14.585: INFO: Pod "var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6" satisfied condition "success or failure"
Dec 15 14:46:14.601: INFO: Trying to get logs from node iruya-node pod var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6 container dapi-container: 
STEP: delete the pod
Dec 15 14:46:14.746: INFO: Waiting for pod var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6 to disappear
Dec 15 14:46:14.766: INFO: Pod var-expansion-91321c73-a393-4d8e-a906-f2bf4a6553d6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:46:14.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2145" for this suite.
Dec 15 14:46:20.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:46:20.934: INFO: namespace var-expansion-2145 deletion completed in 6.133622337s

• [SLOW TEST:22.877 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
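
The substitution being tested is the kubelet's $(VAR) expansion in container args, which happens before the container's own shell ever runs. A minimal sketch (pod name, env var and message are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo      # hypothetical name
  spec:
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: "hello from the args"
      command: ["/bin/sh", "-c"]
      args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet
    restartPolicy: Never
  EOF
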
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:46:20.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-3e1bd7a5-885e-41fb-acf7-9f3e0c48f119 in namespace container-probe-8226
Dec 15 14:46:35.165: INFO: Started pod busybox-3e1bd7a5-885e-41fb-acf7-9f3e0c48f119 in namespace container-probe-8226
STEP: checking the pod's current state and verifying that restartCount is present
Dec 15 14:46:35.170: INFO: Initial restart count of pod busybox-3e1bd7a5-885e-41fb-acf7-9f3e0c48f119 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:50:36.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8226" for this suite.
Dec 15 14:50:45.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:50:45.218: INFO: namespace container-probe-8226 deletion completed in 8.264928299s

• [SLOW TEST:264.284 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
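Editor's note: the probe spec above verifies that a healthy exec probe never bumps restartCount over roughly four minutes of observation (hence the 264-second runtime). A hedged sketch of the shape of such a pod; image, timings, and names are assumptions, not the suite's exact fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    # /tmp/health stays in place, so the probe always succeeds
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount should remain 0 for as long as the file exists
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'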
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:50:45.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 14:50:45.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d" in namespace "downward-api-5734" to be "success or failure"
Dec 15 14:50:45.471: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.806087ms
Dec 15 14:50:47.485: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045833504s
Dec 15 14:50:49.741: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301323943s
Dec 15 14:50:51.749: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310052178s
Dec 15 14:50:53.756: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31640999s
Dec 15 14:50:55.765: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325611067s
Dec 15 14:50:57.785: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.345405064s
Dec 15 14:50:59.793: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.354147078s
STEP: Saw pod success
Dec 15 14:50:59.794: INFO: Pod "downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d" satisfied condition "success or failure"
Dec 15 14:50:59.800: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d container client-container: 
STEP: delete the pod
Dec 15 14:50:59.878: INFO: Waiting for pod downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d to disappear
Dec 15 14:50:59.883: INFO: Pod downwardapi-volume-5c49b420-6d76-49e4-ac33-0bb06e8c9d1d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:50:59.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5734" for this suite.
Dec 15 14:51:06.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:51:06.130: INFO: namespace downward-api-5734 deletion completed in 6.242001097s

• [SLOW TEST:20.911 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
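Editor's note: the spec above checks that when a container sets no memory limit, a downwardAPI volume item for limits.memory falls back to the node's allocatable memory. A rough, illustrative manifest (names are invented):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here, so the projected value
    # defaults to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF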
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:51:06.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:51:11.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2200" for this suite.
Dec 15 14:51:17.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:51:17.999: INFO: namespace watch-2200 deletion completed in 6.163118378s

• [SLOW TEST:11.870 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
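Editor's note: the watch spec above asserts that concurrent watches started from the same resourceVersion deliver events in the same order. A rough shell sketch of that guarantee against the raw watch endpoint (namespace, names, and timings are assumptions; kubectl get --raw just issues the GET and streams the chunked watch response):

kubectl create configmap watch-order-demo --from-literal=k=v
RV=$(kubectl get configmap watch-order-demo -o jsonpath='{.metadata.resourceVersion}')
# two concurrent watches from the same starting resourceVersion
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}" > w1.jsonl &
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}" > w2.jsonl &
for i in 1 2 3; do
  kubectl label configmap watch-order-demo "round=${i}" --overwrite
done
sleep 2; kill %1 %2 2>/dev/null
# both streams should report the MODIFIED events in the same order
diff w1.jsonl w2.jsonl && echo "identical event order"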
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:51:18.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6032.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6032.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 14:51:40.410: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb: the server could not find the requested resource (get pods dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb)
Dec 15 14:51:40.416: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb: the server could not find the requested resource (get pods dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb)
Dec 15 14:51:40.423: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb: the server could not find the requested resource (get pods dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb)
Dec 15 14:51:40.433: INFO: Unable to read jessie_udp@PodARecord from pod dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb: the server could not find the requested resource (get pods dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb)
Dec 15 14:51:40.439: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb: the server could not find the requested resource (get pods dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb)
Dec 15 14:51:40.439: INFO: Lookups using dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb failed for: [wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 15 14:51:45.526: INFO: DNS probes using dns-6032/dns-test-9275048a-c4f8-40dc-8930-4b1fc4906acb succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:51:45.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6032" for this suite.
Dec 15 14:51:51.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:51:51.927: INFO: namespace dns-6032 deletion completed in 6.252106358s

• [SLOW TEST:33.926 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
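Editor's note: the dig loops quoted above are the suite's probers. Outside the suite, the same cluster-DNS check is commonly done with a one-off pod; busybox:1.28 is the usual choice because its nslookup behaves sensibly (image and pod name here are illustrative):

kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local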
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:51:51.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 15 14:52:06.323: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 15 14:52:16.473: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:52:16.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2502" for this suite.
Dec 15 14:52:22.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:52:22.698: INFO: namespace pods-2502 deletion completed in 6.209227551s

• [SLOW TEST:30.771 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
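Editor's note: the Delete Grace Period spec above drives a graceful delete and waits for the kubelet to observe the termination notice. A hedged command-line equivalent (pod name, image, and grace period are arbitrary):

kubectl run grace-demo --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/grace-demo --timeout=120s
# request graceful termination; kubectl returns once deletion completes
kubectl delete pod grace-demo --grace-period=30
kubectl get pod grace-demo   # expect: Error from server (NotFound)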
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:52:22.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-b4966580-50f6-4d5b-bf38-c9283180a772
STEP: Creating a pod to test consume configMaps
Dec 15 14:52:23.049: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d" in namespace "projected-1361" to be "success or failure"
Dec 15 14:52:23.066: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.692735ms
Dec 15 14:52:25.075: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025319595s
Dec 15 14:52:27.087: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037275046s
Dec 15 14:52:29.113: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063602802s
Dec 15 14:52:31.123: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073881806s
Dec 15 14:52:33.146: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09685314s
Dec 15 14:52:35.158: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.108282526s
Dec 15 14:52:37.206: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.156658667s
Dec 15 14:52:39.223: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.173952498s
STEP: Saw pod success
Dec 15 14:52:39.223: INFO: Pod "pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d" satisfied condition "success or failure"
Dec 15 14:52:39.227: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 15 14:52:39.421: INFO: Waiting for pod pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d to disappear
Dec 15 14:52:39.427: INFO: Pod pod-projected-configmaps-923e820b-c3bb-4808-a9f8-c52ebb03be2d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:52:39.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1361" for this suite.
Dec 15 14:52:45.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:52:45.629: INFO: namespace projected-1361 deletion completed in 6.195267129s

• [SLOW TEST:22.931 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
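Editor's note: "with mappings" in the spec above means the ConfigMap key is projected under a renamed file path via items. An illustrative manifest (all names invented):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-demo-cm
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/renamed-data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
          items:
          - key: data-1
            path: renamed-data   # the mapping: key data-1 appears under this file name
EOF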
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:52:45.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1215 14:52:59.845016       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 14:52:59.845: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:52:59.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5681" for this suite.
Dec 15 14:53:27.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:53:36.641: INFO: namespace gc-5681 deletion completed in 36.792103006s

• [SLOW TEST:51.011 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
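Editor's note: the garbage-collector spec above hinges on multi-owner semantics. Deleting one owner removes only that ownerReference from each dependent; the GC collects a pod only once no valid owner remains, which is why the pods shared between simpletest-rc-to-be-deleted and simpletest-rc-to-stay survive. Illustrative metadata for such a dually-owned pod (this is a fragment, not a standalone manifest, and the UIDs are placeholders that must match the live owners):

# fragment of a dependent pod's metadata
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001   # placeholder
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002   # placeholder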
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:53:36.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 15 14:53:38.387: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 15 14:53:38.478: INFO: Waiting for terminating namespaces to be deleted...
Dec 15 14:53:39.350: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 15 14:53:39.375: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.375: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 14:53:39.375: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 15 14:53:39.375: INFO: 	Container weave ready: true, restart count 0
Dec 15 14:53:39.375: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 14:53:39.375: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 15 14:53:39.389: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 15 14:53:39.389: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 14:53:39.389: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 15 14:53:39.389: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 15 14:53:39.389: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container coredns ready: true, restart count 0
Dec 15 14:53:39.389: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container etcd ready: true, restart count 0
Dec 15 14:53:39.389: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 15 14:53:39.389: INFO: 	Container weave ready: true, restart count 0
Dec 15 14:53:39.389: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 14:53:39.389: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 15 14:53:39.389: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-77ea9120-b752-4fc7-8347-8fda78921e2b 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-77ea9120-b752-4fc7-8347-8fda78921e2b off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-77ea9120-b752-4fc7-8347-8fda78921e2b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:54:16.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7275" for this suite.
Dec 15 14:54:46.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:54:46.487: INFO: namespace sched-pred-7275 deletion completed in 30.307186032s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:69.845 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
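Editor's note: the predicate spec above is label-then-nodeSelector round-tripping. A hedged manual equivalent (label key/value and image are invented; the node name comes from the log above):

kubectl label node iruya-node example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl wait --for=condition=PodScheduled pod/nodeselector-demo
kubectl label node iruya-node example.com/e2e-demo-   # clean up the label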
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:54:46.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 14:54:46.670: INFO: Creating deployment "nginx-deployment"
Dec 15 14:54:46.683: INFO: Waiting for observed generation 1
Dec 15 14:54:51.733: INFO: Waiting for all required pods to come up
Dec 15 14:54:51.760: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 15 14:55:37.130: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 15 14:55:37.141: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 15 14:55:37.154: INFO: Updating deployment nginx-deployment
Dec 15 14:55:37.154: INFO: Waiting for observed generation 2
Dec 15 14:55:40.152: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 15 14:55:44.348: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 15 14:55:44.364: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 15 14:55:46.236: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 15 14:55:46.236: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 15 14:55:46.239: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 15 14:55:46.254: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 15 14:55:46.254: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 15 14:55:46.265: INFO: Updating deployment nginx-deployment
Dec 15 14:55:46.265: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 15 14:55:49.744: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 15 14:55:56.684: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 15 14:55:58.789: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2332,SelfLink:/apis/apps/v1/namespaces/deployment-2332/deployments/nginx-deployment,UID:ea75b85a-0dcb-4ad0-988a-7b5b95dff8bd,ResourceVersion:16775441,Generation:3,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-15 14:55:44 +0000 UTC 2019-12-15 14:54:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-15 14:55:49 +0000 UTC 2019-12-15 14:55:49 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 15 14:56:00.719: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2332,SelfLink:/apis/apps/v1/namespaces/deployment-2332/replicasets/nginx-deployment-55fb7cb77f,UID:1957f353-1ff7-4869-850d-5d85496bd33d,ResourceVersion:16775447,Generation:3,CreationTimestamp:2019-12-15 14:55:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ea75b85a-0dcb-4ad0-988a-7b5b95dff8bd 0xc00140e337 0xc00140e338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 15 14:56:00.719: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 15 14:56:00.719: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2332,SelfLink:/apis/apps/v1/namespaces/deployment-2332/replicasets/nginx-deployment-7b8c6f4498,UID:20676979-3901-4295-90c4-34d99cbc20f6,ResourceVersion:16775435,Generation:3,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ea75b85a-0dcb-4ad0-988a-7b5b95dff8bd 0xc00140e407 0xc00140e408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 15 14:56:02.684: INFO: Pod "nginx-deployment-55fb7cb77f-28pzv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-28pzv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-28pzv,UID:4e8361e1-73ce-45ca-87ab-773f1d71f2a0,ResourceVersion:16775357,Generation:0,CreationTimestamp:2019-12-15 14:55:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140edd7 0xc00140edd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140ee40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140ee60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-15 14:55:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.686: INFO: Pod "nginx-deployment-55fb7cb77f-2w5h5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2w5h5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-2w5h5,UID:25b51d24-266b-4496-b2cf-417151062fc9,ResourceVersion:16775366,Generation:0,CreationTimestamp:2019-12-15 14:55:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140ef47 0xc00140ef48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140efc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140efe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-15 14:55:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.686: INFO: Pod "nginx-deployment-55fb7cb77f-5z252" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5z252,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-5z252,UID:a8484be7-5e09-4cc5-b238-d24207c230c1,ResourceVersion:16775419,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f0d7 0xc00140f0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.686: INFO: Pod "nginx-deployment-55fb7cb77f-85l6w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-85l6w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-85l6w,UID:47101a30-a8bc-4a60-83c8-1a4a9698e714,ResourceVersion:16775404,Generation:0,CreationTimestamp:2019-12-15 14:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f1f7 0xc00140f1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.686: INFO: Pod "nginx-deployment-55fb7cb77f-8wq6w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8wq6w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-8wq6w,UID:9f92c498-29e8-4f6a-9ddc-e31724c2e46a,ResourceVersion:16775451,Generation:0,CreationTimestamp:2019-12-15 14:55:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f327 0xc00140f328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f3a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2019-12-15 14:55:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-dkvwr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dkvwr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-dkvwr,UID:21be85eb-18a0-4376-baf6-c59879acdbfe,ResourceVersion:16775367,Generation:0,CreationTimestamp:2019-12-15 14:55:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f4c7 0xc00140f4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-15 14:55:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-ff4m4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ff4m4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-ff4m4,UID:993bfdcb-cf34-4c7c-96e3-07e572509afa,ResourceVersion:16775420,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f637 0xc00140f638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f6b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-lbhw9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lbhw9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-lbhw9,UID:14b360a9-9a01-4768-85a5-1eb5d7b9b1bf,ResourceVersion:16775403,Generation:0,CreationTimestamp:2019-12-15 14:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f757 0xc00140f758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-lr9hs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lr9hs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-lr9hs,UID:21da86c3-c55e-4d58-a3dc-f8a278359835,ResourceVersion:16775345,Generation:0,CreationTimestamp:2019-12-15 14:55:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140f8d7 0xc00140f8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140f950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140f970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-15 14:55:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-mfkcb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mfkcb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-mfkcb,UID:88c6140d-1f93-4b85-b849-892cb2017320,ResourceVersion:16775426,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140fa67 0xc00140fa68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140fae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140fb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-mwpct" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mwpct,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-mwpct,UID:88dee870-5dd4-4392-b04b-ce99bd47d0a1,ResourceVersion:16775418,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140fb97 0xc00140fb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140fc00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140fc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.687: INFO: Pod "nginx-deployment-55fb7cb77f-prbz6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-prbz6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-prbz6,UID:04ed7752-409e-4067-9b86-da66f63d6eb7,ResourceVersion:16775452,Generation:0,CreationTimestamp:2019-12-15 14:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140fce7 0xc00140fce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140fd70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140fd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-15 14:55:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.688: INFO: Pod "nginx-deployment-55fb7cb77f-zgqrv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zgqrv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-55fb7cb77f-zgqrv,UID:8ab259a5-100b-4d92-bb3a-4a059024a2a7,ResourceVersion:16775432,Generation:0,CreationTimestamp:2019-12-15 14:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1957f353-1ff7-4869-850d-5d85496bd33d 0xc00140fe67 0xc00140fe68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140fed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140fef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.688: INFO: Pod "nginx-deployment-7b8c6f4498-2jm8r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2jm8r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-2jm8r,UID:2bf9fe45-7960-4ab9-8dfc-7ee138d2ade2,ResourceVersion:16775273,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc00140ff77 0xc00140ff78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140fff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7cbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-15 14:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://18c97d5ef38dddf2fc4b107094f043dc4b3f811e549f67cd8405ac74f0a2731a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.688: INFO: Pod "nginx-deployment-7b8c6f4498-47qm9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-47qm9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-47qm9,UID:389f717d-b5c1-41be-a459-2ef1b2076024,ResourceVersion:16775406,Generation:0,CreationTimestamp:2019-12-15 14:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7cca7 0xc002d7cca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7cd20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7cd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.688: INFO: Pod "nginx-deployment-7b8c6f4498-5c2wp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5c2wp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-5c2wp,UID:9305e4ca-9805-4da0-a5ad-f15b327fecd1,ResourceVersion:16775424,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7cdc7 0xc002d7cdc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7ce40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7ce60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.689: INFO: Pod "nginx-deployment-7b8c6f4498-8nz6z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8nz6z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-8nz6z,UID:c6624f4f-01c8-4f14-99bc-3ec3ce55f1e0,ResourceVersion:16775280,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7cee7 0xc002d7cee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7cf70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7cf90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-15 14:54:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://05559da663030d7d41c18039f6c6d205ac2e317961a7f989de858fa07d9603af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.689: INFO: Pod "nginx-deployment-7b8c6f4498-bcbfg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bcbfg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-bcbfg,UID:4283999f-2dde-4bd9-aa48-cf07dc814425,ResourceVersion:16775411,Generation:0,CreationTimestamp:2019-12-15 14:55:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7d077 0xc002d7d078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7d0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7d100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-15 14:55:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.689: INFO: Pod "nginx-deployment-7b8c6f4498-dqg98" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dqg98,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-dqg98,UID:3e0b9619-80c5-4fc1-8974-b0a69f2bac2e,ResourceVersion:16775422,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7d1e7 0xc002d7d1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7d260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7d280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.689: INFO: Pod "nginx-deployment-7b8c6f4498-k5kl9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k5kl9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-k5kl9,UID:d829a9d9-b773-4232-977e-0b84f8d2ea56,ResourceVersion:16775274,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7d307 0xc002d7d308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7d370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7d390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-15 14:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8fdebaa3f2ee40a1ca18f214ab21b82a820e73e59849d110e35e44f8cb3b9aa7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.689: INFO: Pod "nginx-deployment-7b8c6f4498-mdmd8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mdmd8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-mdmd8,UID:969c42bc-66f4-48ab-8beb-21a6c73ac1e4,ResourceVersion:16775402,Generation:0,CreationTimestamp:2019-12-15 14:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7d467 0xc002d7d468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7d520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7d560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.690: INFO: Pod "nginx-deployment-7b8c6f4498-nlwf5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nlwf5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-nlwf5,UID:90aa47de-bb9e-4439-8b70-f8ff13907aa0,ResourceVersion:16775289,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7d5f7 0xc002d7d5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7d670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7d6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-15 14:54:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cf48d42bdb1c982d7f789e6b61cb111d84df1629848df40b302f4564bd062261}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.690: INFO: Pod "nginx-deployment-7b8c6f4498-r925r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r925r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-r925r,UID:169199ea-671e-4477-a91a-47d15cbcb9c7,ResourceVersion:16775405,Generation:0,CreationTimestamp:2019-12-15 14:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7d8c7 0xc002d7d8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7d980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7d9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.690: INFO: Pod "nginx-deployment-7b8c6f4498-rmpgs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rmpgs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-rmpgs,UID:9daa6e0d-b298-4ef7-91b7-516d042680b0,ResourceVersion:16775421,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7db27 0xc002d7db28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7dbb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7dbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.690: INFO: Pod "nginx-deployment-7b8c6f4498-rzk98" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rzk98,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-rzk98,UID:e2093252-c81a-4212-b62c-e464e2ca2468,ResourceVersion:16775430,Generation:0,CreationTimestamp:2019-12-15 14:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7dc67 0xc002d7dc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7dce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7dd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-15 14:55:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.691: INFO: Pod "nginx-deployment-7b8c6f4498-tps52" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tps52,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-tps52,UID:dee64b0e-b0dc-4f4e-aa61-72458183822d,ResourceVersion:16775437,Generation:0,CreationTimestamp:2019-12-15 14:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7de37 0xc002d7de38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d7dea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d7dec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-15 14:55:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.691: INFO: Pod "nginx-deployment-7b8c6f4498-trlcc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-trlcc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-trlcc,UID:11d4c449-ef2f-476c-aa31-265b71ff8ba8,ResourceVersion:16775407,Generation:0,CreationTimestamp:2019-12-15 14:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc002d7dfd7 0xc002d7dfd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.691: INFO: Pod "nginx-deployment-7b8c6f4498-v7v7z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v7v7z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-v7v7z,UID:9e470049-5051-4801-ac82-0b7fd594b2ec,ResourceVersion:16775292,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc0027ec107 0xc0027ec108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-15 14:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c578dccc1059c12b999bac71a31b01f2787ba9f412e7215e07e5cca3a1a680d7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.691: INFO: Pod "nginx-deployment-7b8c6f4498-vh9xk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vh9xk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-vh9xk,UID:d785de95-b746-49f9-9f40-ad15382a763e,ResourceVersion:16775414,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc0027ec277 0xc0027ec278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec2e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.692: INFO: Pod "nginx-deployment-7b8c6f4498-vl82c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vl82c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-vl82c,UID:013af58b-ff28-492e-8ab7-d52ecbd4bfa1,ResourceVersion:16775284,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc0027ec387 0xc0027ec388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec400} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-15 14:54:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ffaeea2c40f34bbbb514492b960666eb40e0af6bffb1a043b24b92cb16184875}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.692: INFO: Pod "nginx-deployment-7b8c6f4498-wdvrx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wdvrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-wdvrx,UID:a6a26b72-ac3c-41e2-8a0e-2894f6a3bbbe,ResourceVersion:16775417,Generation:0,CreationTimestamp:2019-12-15 14:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc0027ec4f7 0xc0027ec4f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.692: INFO: Pod "nginx-deployment-7b8c6f4498-xcs6l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xcs6l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-xcs6l,UID:91f0cca2-eef4-4f59-bc91-fa11c783a05d,ResourceVersion:16775297,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc0027ec617 0xc0027ec618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-15 14:54:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://14f01507be2502c9766e6cab59dcd2eeb6eafb7c80a3fb4d47040c62fdef772d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 15 14:56:02.694: INFO: Pod "nginx-deployment-7b8c6f4498-xngdr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xngdr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2332,SelfLink:/api/v1/namespaces/deployment-2332/pods/nginx-deployment-7b8c6f4498-xngdr,UID:a1a20526-0930-4a65-82c9-43ece25a52e7,ResourceVersion:16775260,Generation:0,CreationTimestamp:2019-12-15 14:54:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 20676979-3901-4295-90c4-34d99cbc20f6 0xc0027ec777 0xc0027ec778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jpddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jpddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jpddb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ec7e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ec800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:55:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 14:54:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-15 14:54:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-15 14:55:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://38d86554a0d251bce0fb981dc6d71173936bf15a5812d77903b79434f3b5bfba}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:56:02.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2332" for this suite.
Dec 15 14:57:34.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:57:35.024: INFO: namespace deployment-2332 deletion completed in 1m30.870480908s

• [SLOW TEST:168.537 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
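The proportional-scaling test above scales a Deployment while a rolling update is still in flight; the controller then splits the new replica count across the old and new ReplicaSets in proportion to their current sizes, which is why the pod dump shows a mix of available and pending pods owned by the same ReplicaSet hash. A minimal Go sketch of a Deployment shaped like the test fixture follows — the replica count and the maxSurge/maxUnavailable values are illustrative assumptions, since the log only shows the resulting pods, not the Deployment spec:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "nginx"}
	maxSurge := intstr.FromInt(3)       // assumed value; not printed in the log
	maxUnavailable := intstr.FromInt(2) // assumed value; not printed in the log

	d := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // scaling this mid-rollout triggers proportional scaling
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // image seen in the pod dumps above
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(d)
	fmt.Println(string(out))
}
```

Because larger ReplicaSets receive proportionally more of the added replicas, neither the old nor the new side of the rollout is starved while the surge/unavailability budget is respected.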
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:57:35.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-478749b8-ca59-41cf-9f56-1499bfe735d3
STEP: Creating a pod to test consume secrets
Dec 15 14:57:35.285: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270" in namespace "projected-3574" to be "success or failure"
Dec 15 14:57:35.312: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 27.614292ms
Dec 15 14:57:37.324: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039114807s
Dec 15 14:57:39.363: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078712147s
Dec 15 14:57:41.374: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089744612s
Dec 15 14:57:43.382: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096908461s
Dec 15 14:57:45.391: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106564989s
Dec 15 14:57:47.399: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 12.113818176s
Dec 15 14:57:50.984: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Pending", Reason="", readiness=false. Elapsed: 15.699250466s
Dec 15 14:57:52.999: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.714702067s
STEP: Saw pod success
Dec 15 14:57:53.000: INFO: Pod "pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270" satisfied condition "success or failure"
Dec 15 14:57:53.018: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270 container projected-secret-volume-test: 
STEP: delete the pod
Dec 15 14:57:53.268: INFO: Waiting for pod pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270 to disappear
Dec 15 14:57:53.294: INFO: Pod pod-projected-secrets-8ea9d0a7-9e5e-42eb-9bee-ed0594c78270 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:57:53.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3574" for this suite.
Dec 15 14:57:59.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:57:59.477: INFO: namespace projected-3574 deletion completed in 6.173221686s

• [SLOW TEST:24.452 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
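The projected-secret test mounts a Secret through a projected volume and asserts that the resulting files carry the requested defaultMode. A hedged Go sketch of such a pod spec follows — the secret name matches the one created in the test above, but the mode (0400), mount path, and busybox stand-in image are illustrative, since the log does not print the pod spec itself:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// Applied to every projected file unless overridden per item.
						DefaultMode: int32Ptr(0400), // assumed mode; the test's value is not in the log
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-478749b8-ca59-41cf-9f56-1499bfe735d3",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // stand-in; the e2e suite uses its own mount-test image
				Command: []string{"sh", "-c", "stat -c '%a' /etc/projected-secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```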
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:57:59.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 15 14:57:59.644: INFO: Waiting up to 5m0s for pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901" in namespace "emptydir-5388" to be "success or failure"
Dec 15 14:57:59.687: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 43.103337ms
Dec 15 14:58:01.701: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056727076s
Dec 15 14:58:03.707: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06250264s
Dec 15 14:58:05.720: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075719537s
Dec 15 14:58:07.728: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083352704s
Dec 15 14:58:09.755: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110746278s
Dec 15 14:58:11.766: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Pending", Reason="", readiness=false. Elapsed: 12.121515297s
Dec 15 14:58:13.786: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.141826676s
STEP: Saw pod success
Dec 15 14:58:13.786: INFO: Pod "pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901" satisfied condition "success or failure"
Dec 15 14:58:13.796: INFO: Trying to get logs from node iruya-node pod pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901 container test-container: 
STEP: delete the pod
Dec 15 14:58:14.027: INFO: Waiting for pod pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901 to disappear
Dec 15 14:58:14.083: INFO: Pod pod-2ed31d1f-4d04-42db-820e-ea8f6ed62901 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:58:14.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5388" for this suite.
Dec 15 14:58:20.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:58:20.371: INFO: namespace emptydir-5388 deletion completed in 6.268869808s

• [SLOW TEST:20.894 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
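The "(non-root,0666,tmpfs)" triple in the test name encodes three things: the container runs as a non-root UID, the file written into the volume must end up with mode 0666, and the emptyDir is backed by tmpfs rather than node disk. A minimal sketch of that shape, assuming busybox in place of the suite's mount-test image and an arbitrary non-root UID:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // "non-root" means any non-zero UID; 1001 is an assumption
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the suite's mount-test image
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a %u' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```

The emptyDir directory is created world-writable, which is what lets the non-root UID create and chmod the file.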
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:58:20.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 15 14:58:20.586: INFO: Waiting up to 5m0s for pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10" in namespace "containers-1888" to be "success or failure"
Dec 15 14:58:20.683: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 96.840908ms
Dec 15 14:58:22.690: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10411893s
Dec 15 14:58:24.695: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109170072s
Dec 15 14:58:26.709: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123281693s
Dec 15 14:58:28.715: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129353091s
Dec 15 14:58:30.731: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 10.145337043s
Dec 15 14:58:32.750: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 12.163757986s
Dec 15 14:58:34.757: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17135084s
Dec 15 14:58:36.766: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Pending", Reason="", readiness=false. Elapsed: 16.180210094s
Dec 15 14:58:38.773: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.187117744s
STEP: Saw pod success
Dec 15 14:58:38.773: INFO: Pod "client-containers-08656c8f-39d6-465e-8287-5b35601d7d10" satisfied condition "success or failure"
Dec 15 14:58:38.776: INFO: Trying to get logs from node iruya-node pod client-containers-08656c8f-39d6-465e-8287-5b35601d7d10 container test-container: 
STEP: delete the pod
Dec 15 14:58:38.826: INFO: Waiting for pod client-containers-08656c8f-39d6-465e-8287-5b35601d7d10 to disappear
Dec 15 14:58:38.832: INFO: Pod client-containers-08656c8f-39d6-465e-8287-5b35601d7d10 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:58:38.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1888" for this suite.
Dec 15 14:58:45.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:58:45.447: INFO: namespace containers-1888 deletion completed in 6.610969987s

• [SLOW TEST:25.075 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
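The Docker Containers test exercises a simple contract: when a container spec leaves both Command and Args unset, the kubelet runs the image's own ENTRYPOINT and CMD. A sketch of the shape the test relies on — the image here is the nginx one used elsewhere in this run, as an illustrative stand-in for the suite's actual test image:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/nginx:1.14-alpine",
				// Command and Args deliberately unset: the container starts with
				// the image's own ENTRYPOINT/CMD, which is what the test asserts.
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```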
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:58:45.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 15 14:59:02.269: INFO: Successfully updated pod "pod-update-activedeadlineseconds-dd1dd242-a049-4df2-a26f-fbbb7694c0cd"
Dec 15 14:59:02.269: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-dd1dd242-a049-4df2-a26f-fbbb7694c0cd" in namespace "pods-9416" to be "terminated due to deadline exceeded"
Dec 15 14:59:02.279: INFO: Pod "pod-update-activedeadlineseconds-dd1dd242-a049-4df2-a26f-fbbb7694c0cd": Phase="Running", Reason="", readiness=true. Elapsed: 8.981119ms
Dec 15 14:59:04.286: INFO: Pod "pod-update-activedeadlineseconds-dd1dd242-a049-4df2-a26f-fbbb7694c0cd": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.016164013s
Dec 15 14:59:04.286: INFO: Pod "pod-update-activedeadlineseconds-dd1dd242-a049-4df2-a26f-fbbb7694c0cd" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:59:04.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9416" for this suite.
Dec 15 14:59:10.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:59:10.427: INFO: namespace pods-9416 deletion completed in 6.135830914s

• [SLOW TEST:24.979 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
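activeDeadlineSeconds is one of the few pod spec fields that may be changed on a live pod; the test above updates it on a running pod and then watches the pod flip to Phase=Failed with Reason=DeadlineExceeded, exactly the transition visible at 14:59:04. For brevity this sketch sets the field at creation rather than via an update, and the 5-second value is an assumption, since the test's number is not printed:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds-example"},
		Spec: corev1.PodSpec{
			// Once this many seconds have elapsed since the pod started, the
			// kubelet kills it: Phase=Failed, Reason=DeadlineExceeded.
			ActiveDeadlineSeconds: int64Ptr(5), // assumed value for illustration
			RestartPolicy:         corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"}, // outlives the deadline on purpose
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```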
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:59:10.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 15 14:59:10.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4577'
Dec 15 14:59:13.095: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 14:59:13.095: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 15 14:59:13.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4577'
Dec 15 14:59:13.442: INFO: stderr: ""
Dec 15 14:59:13.443: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:59:13.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4577" for this suite.
Dec 15 14:59:35.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:59:35.746: INFO: namespace kubectl-4577 deletion completed in 22.291914095s

• [SLOW TEST:25.319 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
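The stderr captured above notes that `kubectl run --generator=job/v1` is deprecated; the object it produced is an ordinary batch/v1 Job whose pod template uses RestartPolicy OnFailure. A sketch of the equivalent Job, matching the name and image from the test run:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	job := batchv1.Job{
		TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "Job"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure restarts the failed container in place instead of
					// letting the Job controller create replacement pods.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(job)
	fmt.Println(string(out))
}
```

The same object can be created directly with `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`, which is the replacement the deprecation warning points toward.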
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:59:35.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-045ec79a-69ac-41d0-896d-22e5020438b4
STEP: Creating a pod to test consume configMaps
Dec 15 14:59:36.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c" in namespace "configmap-3026" to be "success or failure"
Dec 15 14:59:36.395: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 170.480516ms
Dec 15 14:59:38.409: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184340062s
Dec 15 14:59:40.419: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194420317s
Dec 15 14:59:42.426: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201463433s
Dec 15 14:59:44.433: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208516261s
Dec 15 14:59:46.454: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229646702s
Dec 15 14:59:48.471: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.247056353s
Dec 15 14:59:50.489: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.264958505s
Dec 15 14:59:52.514: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.289797038s
STEP: Saw pod success
Dec 15 14:59:52.514: INFO: Pod "pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c" satisfied condition "success or failure"
Dec 15 14:59:52.522: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c container configmap-volume-test: 
STEP: delete the pod
Dec 15 14:59:52.746: INFO: Waiting for pod pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c to disappear
Dec 15 14:59:52.758: INFO: Pod pod-configmaps-8bc9ff33-20b5-4c9f-83a2-259e21b5fb1c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:59:52.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3026" for this suite.
Dec 15 14:59:58.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 14:59:58.901: INFO: namespace configmap-3026 deletion completed in 6.133958698s

• [SLOW TEST:23.153 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 14:59:58.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 15 14:59:59.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 15 14:59:59.319: INFO: stderr: ""
Dec 15 14:59:59.319: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 14:59:59.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6871" for this suite.
Dec 15 15:00:05.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:00:05.543: INFO: namespace kubectl-6871 deletion completed in 6.209211947s

• [SLOW TEST:6.642 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:00:05.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8698/secret-test-028e7f67-34b3-488f-82ae-21333a696e86
STEP: Creating a pod to test consume secrets
Dec 15 15:00:05.719: INFO: Waiting up to 5m0s for pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1" in namespace "secrets-8698" to be "success or failure"
Dec 15 15:00:05.812: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 92.370034ms
Dec 15 15:00:07.818: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098504432s
Dec 15 15:00:09.827: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107281858s
Dec 15 15:00:11.897: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177616998s
Dec 15 15:00:13.914: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1": Phase="Running", Reason="", readiness=true. Elapsed: 8.194493789s
Dec 15 15:00:15.923: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.203070578s
STEP: Saw pod success
Dec 15 15:00:15.923: INFO: Pod "pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1" satisfied condition "success or failure"
Dec 15 15:00:15.927: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1 container env-test: 
STEP: delete the pod
Dec 15 15:00:16.029: INFO: Waiting for pod pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1 to disappear
Dec 15 15:00:16.035: INFO: Pod pod-configmaps-d85bb85a-a35a-4895-9ea8-b705bbdce2f1 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:00:16.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8698" for this suite.
Dec 15 15:00:22.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:00:22.156: INFO: namespace secrets-8698 deletion completed in 6.115889058s

• [SLOW TEST:16.612 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
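Consuming a Secret "via the environment" means wiring an environment variable to a key inside the Secret with a secretKeyRef, so the container sees the decoded value at startup. A hedged sketch referencing the secret created in the test above — the key name and the busybox image are illustrative, as the Secret's contents are not in the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-example", Namespace: "secrets-8698"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "secret-test-028e7f67-34b3-488f-82ae-21333a696e86",
							},
							Key: "data-1", // hypothetical key name
						},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```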
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:00:22.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2780
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 15 15:00:22.387: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 15 15:01:02.748: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2780 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 15:01:02.748: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 15:01:03.435: INFO: Waiting for endpoints: map[]
Dec 15 15:01:03.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2780 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 15 15:01:03.450: INFO: >>> kubeConfig: /root/.kube/config
Dec 15 15:01:03.953: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:01:03.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2780" for this suite.
Dec 15 15:01:29.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:01:30.115: INFO: namespace pod-network-test-2780 deletion completed in 26.149667352s

• [SLOW TEST:67.958 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
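The intra-pod UDP check works by exec'ing into a "host test" container and curling a relay webserver at `:8080/dial`, which in turn sends a UDP request to each target pod and reports which hostname answered. A small Go helper that rebuilds the probe URL seen verbatim in the log (note that url.Values sorts parameters alphabetically, so the order differs from the literal log line but the request is equivalent):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDialURL reproduces the probe URL from the log: the host test container
// asks the webserver pod at probeAddr to relay a UDP request to target and
// report which hostname answered.
func buildDialURL(probeAddr, target string, port, tries int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "udp")
	q.Set("host", target)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", fmt.Sprint(tries))
	return fmt.Sprintf("http://%s:8080/dial?%s", probeAddr, q.Encode())
}

func main() {
	// Mirrors the first probe in the log: 10.44.0.2 relaying UDP to 10.44.0.1:8081.
	fmt.Println(buildDialURL("10.44.0.2", "10.44.0.1", 8081, 1))
}
```

The "Waiting for endpoints: map[]" lines mark success: the set of targets still awaiting a reply has drained to empty.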
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:01:30.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 15 15:01:30.261: INFO: Waiting up to 5m0s for pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a" in namespace "emptydir-234" to be "success or failure"
Dec 15 15:01:30.272: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.335493ms
Dec 15 15:01:32.280: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019145101s
Dec 15 15:01:34.289: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027440377s
Dec 15 15:01:36.297: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035823794s
Dec 15 15:01:38.303: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041786483s
Dec 15 15:01:40.314: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052750521s
STEP: Saw pod success
Dec 15 15:01:40.314: INFO: Pod "pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a" satisfied condition "success or failure"
Dec 15 15:01:40.317: INFO: Trying to get logs from node iruya-node pod pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a container test-container: 
STEP: delete the pod
Dec 15 15:01:40.372: INFO: Waiting for pod pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a to disappear
Dec 15 15:01:40.482: INFO: Pod pod-8406c8ae-dfb5-4ac9-a5eb-16c51b98244a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:01:40.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-234" for this suite.
Dec 15 15:01:46.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:01:46.654: INFO: namespace emptydir-234 deletion completed in 6.160353434s

• [SLOW TEST:16.539 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:01:46.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6589708a-9ddd-4be0-91b2-81e815721567
STEP: Creating a pod to test consume secrets
Dec 15 15:01:46.787: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8" in namespace "projected-4084" to be "success or failure"
Dec 15 15:01:46.794: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279711ms
Dec 15 15:01:48.807: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020584517s
Dec 15 15:01:50.822: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03493133s
Dec 15 15:01:52.829: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042279215s
Dec 15 15:01:54.836: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049164601s
Dec 15 15:01:56.843: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056381012s
STEP: Saw pod success
Dec 15 15:01:56.843: INFO: Pod "pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8" satisfied condition "success or failure"
Dec 15 15:01:56.847: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8 container projected-secret-volume-test: 
STEP: delete the pod
Dec 15 15:01:56.988: INFO: Waiting for pod pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8 to disappear
Dec 15 15:01:56.999: INFO: Pod pod-projected-secrets-75e08b8f-283e-4987-a6c2-b15eb1be9fe8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:01:56.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4084" for this suite.
Dec 15 15:02:03.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:02:03.152: INFO: namespace projected-4084 deletion completed in 6.147063483s

• [SLOW TEST:16.497 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:02:03.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1048.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1048.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1048.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1048.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1048.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1048.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 15:02:15.402: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b: the server could not find the requested resource (get pods dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b)
Dec 15 15:02:15.412: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b: the server could not find the requested resource (get pods dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b)
Dec 15 15:02:15.431: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-1048.svc.cluster.local from pod dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b: the server could not find the requested resource (get pods dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b)
Dec 15 15:02:15.444: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b: the server could not find the requested resource (get pods dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b)
Dec 15 15:02:15.451: INFO: Unable to read jessie_udp@PodARecord from pod dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b: the server could not find the requested resource (get pods dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b)
Dec 15 15:02:15.456: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b: the server could not find the requested resource (get pods dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b)
Dec 15 15:02:15.456: INFO: Lookups using dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-1048.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 15 15:02:20.522: INFO: DNS probes using dns-1048/dns-test-ff5004b2-fa5c-4002-8857-88870f7f102b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:02:20.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1048" for this suite.
Dec 15 15:02:26.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:02:26.942: INFO: namespace dns-1048 deletion completed in 6.342412162s

• [SLOW TEST:23.791 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
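
The DNS probes above run getent and dig loops inside "wheezy" and "jessie" prober pods, writing an OK marker file per successful lookup; the probe reads at 15:02:15 fail simply because those marker files do not exist yet, and the retry at 15:02:20 passes. A rough manual check of the same /etc/hosts behaviour, using the dnsutils image already cached on the nodes (pod name and queries are illustrative):

kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 \
  --restart=Never -- sleep 3600
kubectl exec dnsutils -- cat /etc/hosts                # kubelet-managed entries, incl. the pod's own hostname
kubectl exec dnsutils -- getent hosts dnsutils         # resolves via /etc/hosts
kubectl exec dnsutils -- dig +noall +answer +search kubernetes.default A
kubectl delete pod dnsutils
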
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:02:26.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 15 15:02:27.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9200'
Dec 15 15:02:27.463: INFO: stderr: ""
Dec 15 15:02:27.464: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 15 15:02:28.481: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:28.481: INFO: Found 0 / 1
Dec 15 15:02:29.474: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:29.474: INFO: Found 0 / 1
Dec 15 15:02:30.483: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:30.483: INFO: Found 0 / 1
Dec 15 15:02:31.482: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:31.482: INFO: Found 0 / 1
Dec 15 15:02:32.477: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:32.477: INFO: Found 0 / 1
Dec 15 15:02:33.486: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:33.486: INFO: Found 0 / 1
Dec 15 15:02:34.479: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:34.479: INFO: Found 0 / 1
Dec 15 15:02:35.489: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:35.489: INFO: Found 0 / 1
Dec 15 15:02:36.479: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:36.479: INFO: Found 1 / 1
Dec 15 15:02:36.479: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 15 15:02:36.484: INFO: Selector matched 1 pod for map[app:redis]
Dec 15 15:02:36.484: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 15 15:02:36.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jhqv8 redis-master --namespace=kubectl-9200'
Dec 15 15:02:36.671: INFO: stderr: ""
Dec 15 15:02:36.671: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Dec 15:02:34.862 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Dec 15:02:34.863 # Server started, Redis version 3.2.12\n1:M 15 Dec 15:02:34.864 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Dec 15:02:34.864 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 15 15:02:36.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jhqv8 redis-master --namespace=kubectl-9200 --tail=1'
Dec 15 15:02:36.822: INFO: stderr: ""
Dec 15 15:02:36.822: INFO: stdout: "1:M 15 Dec 15:02:34.864 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 15 15:02:36.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jhqv8 redis-master --namespace=kubectl-9200 --limit-bytes=1'
Dec 15 15:02:36.999: INFO: stderr: ""
Dec 15 15:02:36.999: INFO: stdout: " "
STEP: exposing timestamps
Dec 15 15:02:36.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jhqv8 redis-master --namespace=kubectl-9200 --tail=1 --timestamps'
Dec 15 15:02:37.136: INFO: stderr: ""
Dec 15 15:02:37.136: INFO: stdout: "2019-12-15T15:02:34.866302786Z 1:M 15 Dec 15:02:34.864 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 15 15:02:39.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jhqv8 redis-master --namespace=kubectl-9200 --since=1s'
Dec 15 15:02:39.864: INFO: stderr: ""
Dec 15 15:02:39.865: INFO: stdout: ""
Dec 15 15:02:39.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jhqv8 redis-master --namespace=kubectl-9200 --since=24h'
Dec 15 15:02:40.017: INFO: stderr: ""
Dec 15 15:02:40.017: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Dec 15:02:34.862 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Dec 15:02:34.863 # Server started, Redis version 3.2.12\n1:M 15 Dec 15:02:34.864 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Dec 15:02:34.864 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 15 15:02:40.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9200'
Dec 15 15:02:40.164: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 15 15:02:40.164: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 15 15:02:40.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9200'
Dec 15 15:02:40.306: INFO: stderr: "No resources found.\n"
Dec 15 15:02:40.306: INFO: stdout: ""
Dec 15 15:02:40.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9200 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 15 15:02:40.539: INFO: stderr: ""
Dec 15 15:02:40.540: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:02:40.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9200" for this suite.
Dec 15 15:03:02.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:03:02.732: INFO: namespace kubectl-9200 deletion completed in 22.181974663s

• [SLOW TEST:35.789 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
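
The kubectl logs run above exercises the main log-filtering flags: --tail, --limit-bytes, --timestamps, and --since (which is why the --since=1s call returns nothing a couple of seconds after the last Redis line, while --since=24h returns the whole log). Against the same pod, with the namespace flag omitted for brevity, the calls reduce to:

kubectl logs redis-master-jhqv8 redis-master --tail=1                 # last line only
kubectl logs redis-master-jhqv8 redis-master --limit-bytes=1          # first byte only
kubectl logs redis-master-jhqv8 redis-master --tail=1 --timestamps    # prefix RFC3339 timestamps
kubectl logs redis-master-jhqv8 redis-master --since=1s               # usually empty for a quiet pod
kubectl logs redis-master-jhqv8 redis-master --since=24h              # effectively everything
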
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:03:02.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on the node's default medium
Dec 15 15:03:02.802: INFO: Waiting up to 5m0s for pod "pod-278743b7-031c-46fb-90ba-26cc7c055614" in namespace "emptydir-5253" to be "success or failure"
Dec 15 15:03:02.810: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034259ms
Dec 15 15:03:04.816: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013861592s
Dec 15 15:03:06.830: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028203813s
Dec 15 15:03:08.863: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060828526s
Dec 15 15:03:10.878: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075436385s
Dec 15 15:03:12.886: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08371437s
STEP: Saw pod success
Dec 15 15:03:12.886: INFO: Pod "pod-278743b7-031c-46fb-90ba-26cc7c055614" satisfied condition "success or failure"
Dec 15 15:03:12.891: INFO: Trying to get logs from node iruya-node pod pod-278743b7-031c-46fb-90ba-26cc7c055614 container test-container: 
STEP: delete the pod
Dec 15 15:03:13.085: INFO: Waiting for pod pod-278743b7-031c-46fb-90ba-26cc7c055614 to disappear
Dec 15 15:03:13.131: INFO: Pod pod-278743b7-031c-46fb-90ba-26cc7c055614 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:03:13.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5253" for this suite.
Dec 15 15:03:19.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:03:19.338: INFO: namespace emptydir-5253 deletion completed in 6.198158547s

• [SLOW TEST:16.605 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
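
The emptydir test above asserts the mode of a default-medium (node-disk-backed) emptyDir volume; such volumes are created world-writable (0777). A quick sketch of the same check (pod and volume names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}       # default medium; medium: Memory would switch to tmpfs
EOF
kubectl logs pod/emptydir-mode-demo    # expect: 777 directory
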
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:03:19.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-78f18cf8-2f74-4f94-b4e9-193d292cd8d9
STEP: Creating a pod to test consume secrets
Dec 15 15:03:19.430: INFO: Waiting up to 5m0s for pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d" in namespace "secrets-2064" to be "success or failure"
Dec 15 15:03:19.436: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.50807ms
Dec 15 15:03:21.447: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016242893s
Dec 15 15:03:23.455: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024245549s
Dec 15 15:03:25.472: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041365981s
Dec 15 15:03:27.477: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046752402s
Dec 15 15:03:29.488: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057749278s
STEP: Saw pod success
Dec 15 15:03:29.488: INFO: Pod "pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d" satisfied condition "success or failure"
Dec 15 15:03:29.493: INFO: Trying to get logs from node iruya-node pod pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d container secret-volume-test: 
STEP: delete the pod
Dec 15 15:03:29.670: INFO: Waiting for pod pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d to disappear
Dec 15 15:03:29.680: INFO: Pod pod-secrets-3e60c1ec-7155-4390-8aa2-f93dee4ce25d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:03:29.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2064" for this suite.
Dec 15 15:03:35.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:03:35.926: INFO: namespace secrets-2064 deletion completed in 6.237562918s

• [SLOW TEST:16.588 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
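
The Secrets test above mounts a single secret into one pod at two different paths, i.e. two volume entries referencing the same secretName, and reads the key through both mounts. A hand-rolled equivalent (all names illustrative):

kubectl create secret generic multi-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
    - name: secret-volume-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-secret
  - name: secret-volume-2
    secret:
      secretName: multi-secret
EOF
kubectl logs pod/secret-multi-volume-demo   # the value appears twice
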
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:03:35.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 15:03:36.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400" in namespace "projected-3798" to be "success or failure"
Dec 15 15:03:36.178: INFO: Pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400": Phase="Pending", Reason="", readiness=false. Elapsed: 10.443997ms
Dec 15 15:03:38.187: INFO: Pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019694005s
Dec 15 15:03:42.325: INFO: Pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158085583s
Dec 15 15:03:44.389: INFO: Pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221658501s
Dec 15 15:03:46.456: INFO: Pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.288658306s
STEP: Saw pod success
Dec 15 15:03:46.456: INFO: Pod "downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400" satisfied condition "success or failure"
Dec 15 15:03:46.463: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400 container client-container: 
STEP: delete the pod
Dec 15 15:03:46.532: INFO: Waiting for pod downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400 to disappear
Dec 15 15:03:46.584: INFO: Pod downwardapi-volume-cdbe8256-5ebe-4969-b33d-de36e3666400 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:03:46.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3798" for this suite.
Dec 15 15:03:52.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:03:52.771: INFO: namespace projected-3798 deletion completed in 6.174038659s

• [SLOW TEST:16.845 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
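
The projected downwardAPI test above publishes the container's own memory request into a file via a resourceFieldRef and reads it back. Sketched by hand (names and the 32Mi request are illustrative; with the default divisor of 1 the file holds the value in bytes):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl logs pod/downwardapi-memory-demo    # prints 33554432 (32Mi in bytes)
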
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:03:52.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-582
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 15 15:03:53.040: INFO: Found 0 stateful pods, waiting for 3
Dec 15 15:04:03.048: INFO: Found 2 stateful pods, waiting for 3
Dec 15 15:04:13.050: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:04:13.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:04:13.050: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 15 15:04:23.067: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:04:23.067: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:04:23.067: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 15 15:04:23.110: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 15 15:04:33.192: INFO: Updating stateful set ss2
Dec 15 15:04:33.216: INFO: Waiting for Pod statefulset-582/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 15 15:04:43.704: INFO: Found 2 stateful pods, waiting for 3
Dec 15 15:04:53.719: INFO: Found 2 stateful pods, waiting for 3
Dec 15 15:05:03.720: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:05:03.720: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:05:03.720: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 15 15:05:13.721: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:05:13.721: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 15 15:05:13.721: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 15 15:05:13.785: INFO: Updating stateful set ss2
Dec 15 15:05:13.889: INFO: Waiting for Pod statefulset-582/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 15:05:23.946: INFO: Waiting for Pod statefulset-582/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 15:05:33.974: INFO: Updating stateful set ss2
Dec 15 15:05:34.050: INFO: Waiting for StatefulSet statefulset-582/ss2 to complete update
Dec 15 15:05:34.051: INFO: Waiting for Pod statefulset-582/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 15 15:05:44.062: INFO: Waiting for StatefulSet statefulset-582/ss2 to complete update
Dec 15 15:05:44.062: INFO: Waiting for Pod statefulset-582/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 15 15:05:54.070: INFO: Deleting all statefulset in ns statefulset-582
Dec 15 15:05:54.072: INFO: Scaling statefulset ss2 to 0
Dec 15 15:06:34.096: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 15:06:34.099: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:06:34.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-582" for this suite.
Dec 15 15:06:42.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:06:42.269: INFO: namespace statefulset-582 deletion completed in 8.14692628s

• [SLOW TEST:169.497 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
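
The StatefulSet run above drives both behaviours through the RollingUpdate strategy's partition field: pods with an ordinal greater than or equal to the partition move to the new revision, lower ordinals stay put. So partition=replicas pins everything, partition=replicas-1 is a canary, and stepping the partition down completes the phased rollout. The same flow by hand against a set like ss2 (image and partition values mirror the log; the test itself does this through API updates rather than kubectl):

# Pin all 3 replicas to the old revision while changing the template
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

# Canary: only the highest ordinal (ss2-2) is updated
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# Phased completion: partition 0 rolls the remaining pods
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
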
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:06:42.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7136
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7136
STEP: Creating statefulset with conflicting port in namespace statefulset-7136
STEP: Waiting until pod test-pod starts running in namespace statefulset-7136
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7136
Dec 15 15:06:54.603: INFO: Observed stateful pod in namespace: statefulset-7136, name: ss-0, uid: 6dd7ef8c-f93b-440e-8e9d-c05e4fbe39b6, status phase: Pending. Waiting for statefulset controller to delete.
Dec 15 15:11:54.604: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 15 15:11:54.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-7136'
Dec 15 15:11:57.175: INFO: stderr: ""
Dec 15 15:11:57.176: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-7136\nPriority:       0\nNode:           iruya-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-6f98bdb9c4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nControlled By:  StatefulSet/ss\nContainers:\n  nginx:\n    Image:        docker.io/library/nginx:1.14-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qh9ph (ro)\nVolumes:\n  default-token-qh9ph:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qh9ph\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m7s  kubelet, iruya-node  Predicate PodFitsHostPorts failed\n"
Dec 15 15:11:57.176: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-7136
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qh9ph (ro)
Volumes:
  default-token-qh9ph:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qh9ph
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m7s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Dec 15 15:11:57.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-7136 --tail=100'
Dec 15 15:11:57.307: INFO: rc: 1
Dec 15 15:11:57.308: INFO: 
Last 100 log lines of ss-0:

Dec 15 15:11:57.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-7136'
Dec 15 15:11:57.452: INFO: stderr: ""
Dec 15 15:11:57.452: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-7136\nPriority:     0\nNode:         iruya-node/10.96.3.65\nStart Time:   Sun, 15 Dec 2019 15:06:42 +0000\nLabels:       \nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nContainers:\n  nginx:\n    Container ID:   docker://1b82d12dc975ab26236311d5752aa638c5eaa34eed38de57ac2f3693545c30d2\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Sun, 15 Dec 2019 15:06:52 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qh9ph (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qh9ph:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qh9ph\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                 Message\n  ----    ------   ----  ----                 -------\n  Normal  Pulled   5m9s  kubelet, iruya-node  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal  Created  5m5s  kubelet, iruya-node  Created container nginx\n  Normal  Started  5m5s  kubelet, iruya-node  Started container nginx\n"
Dec 15 15:11:57.452: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-7136
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Sun, 15 Dec 2019 15:06:42 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://1b82d12dc975ab26236311d5752aa638c5eaa34eed38de57ac2f3693545c30d2
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Sun, 15 Dec 2019 15:06:52 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qh9ph (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-qh9ph:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qh9ph
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m9s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m5s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m5s  kubelet, iruya-node  Started container nginx

Dec 15 15:11:57.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-7136 --tail=100'
Dec 15 15:11:57.622: INFO: stderr: ""
Dec 15 15:11:57.622: INFO: stdout: ""
Dec 15 15:11:57.622: INFO: 
Last 100 log lines of test-pod:

Dec 15 15:11:57.622: INFO: Deleting all statefulset in ns statefulset-7136
Dec 15 15:11:57.632: INFO: Scaling statefulset ss to 0
Dec 15 15:12:07.659: INFO: Waiting for statefulset status.replicas updated to 0
Dec 15 15:12:07.664: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-7136".
STEP: Found 19 events.
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:42 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-7136/ss is recreating failed Pod ss-0
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:42 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:42 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:43 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:44 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:44 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:45 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:46 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:46 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:46 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:47 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:47 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:48 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:48 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:50 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:52 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Dec 15 15:12:07.699: INFO: At 2019-12-15 15:06:52 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Dec 15 15:12:07.704: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Dec 15 15:12:07.705: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:06:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:06:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:06:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:06:42 +0000 UTC  }]
Dec 15 15:12:07.705: INFO: 
Dec 15 15:12:07.732: INFO: 
Logging node info for node iruya-node
Dec 15 15:12:07.737: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:16777666,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-15 15:11:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-15 15:11:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-15 15:11:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-15 15:11:38 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} 
{[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 15 15:12:07.738: INFO: 
Logging kubelet events for node iruya-node
Dec 15 15:12:07.745: INFO: 
Logging pods the kubelet thinks are on node iruya-node
Dec 15 15:12:07.763: INFO: test-pod started at 2019-12-15 15:06:42 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.763: INFO: 	Container nginx ready: true, restart count 0
Dec 15 15:12:07.763: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Dec 15 15:12:07.763: INFO: 	Container weave ready: true, restart count 0
Dec 15 15:12:07.763: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 15:12:07.763: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.763: INFO: 	Container kube-proxy ready: true, restart count 0
W1215 15:12:07.767864       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 15:12:07.851: INFO: 
Latency metrics for node iruya-node
Dec 15 15:12:07.851: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Dec 15 15:12:07.861: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:16777668,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-15 15:11:38 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-15 15:11:38 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-15 15:11:38 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-15 15:11:38 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Dec 15 15:12:07.862: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Dec 15 15:12:07.867: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Dec 15 15:12:07.887: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.887: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 15 15:12:07.887: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.887: INFO: 	Container coredns ready: true, restart count 0
Dec 15 15:12:07.887: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.887: INFO: 	Container etcd ready: true, restart count 0
Dec 15 15:12:07.887: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Dec 15 15:12:07.887: INFO: 	Container weave ready: true, restart count 0
Dec 15 15:12:07.887: INFO: 	Container weave-npc ready: true, restart count 0
Dec 15 15:12:07.887: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.887: INFO: 	Container coredns ready: true, restart count 0
Dec 15 15:12:07.888: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.888: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 15 15:12:07.888: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.888: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 15 15:12:07.888: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Dec 15 15:12:07.888: INFO: 	Container kube-apiserver ready: true, restart count 0
W1215 15:12:07.901402       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 15:12:07.974: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Dec 15 15:12:07.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7136" for this suite.
Dec 15 15:12:29.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:12:30.113: INFO: namespace statefulset-7136 deletion completed in 22.133002442s

• Failure [347.844 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Dec 15 15:11:54.604: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
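The failure above means the StatefulSet controller never re-created pod ss-0 after it was evicted. A minimal sketch of how that recreation could be checked from outside the framework, shelling out to kubectl just as the suite itself does (namespace and pod names are taken from this run, but the polling interval and the overall program are illustrative assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollPodUID shells out to kubectl (as the e2e framework itself does) and
// returns the UID of the named pod, or an error if it does not exist.
func pollPodUID(ns, pod string) (string, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod,
		"-n", ns, "-o", "jsonpath={.metadata.uid}").Output()
	return string(out), err
}

func main() {
	// Names from this run; substitute your own namespace/pod.
	ns, pod := "statefulset-7136", "ss-0"

	orig, err := pollPodUID(ns, pod)
	if err != nil {
		fmt.Println("pod not found yet:", err)
	}
	// After eviction, a re-created ss-0 must come back under a new UID.
	for i := 0; i < 30; i++ {
		uid, err := pollPodUID(ns, pod)
		if err == nil && uid != "" && uid != orig {
			fmt.Println("ss-0 was re-created, new UID:", uid)
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("ss-0 was never re-created") // the condition this run failed on
}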
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:12:30.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-6qrh
STEP: Creating a pod to test atomic-volume-subpath
Dec 15 15:12:30.211: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6qrh" in namespace "subpath-8967" to be "success or failure"
Dec 15 15:12:30.219: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Pending", Reason="", readiness=false. Elapsed: 7.65831ms
Dec 15 15:12:32.608: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396448867s
Dec 15 15:12:34.630: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.418058954s
Dec 15 15:12:36.647: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435865777s
Dec 15 15:12:39.095: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.883064502s
Dec 15 15:12:41.110: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 10.898303042s
Dec 15 15:12:43.121: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 12.90996693s
Dec 15 15:12:45.137: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 14.925516428s
Dec 15 15:12:47.147: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 16.935517129s
Dec 15 15:12:49.154: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 18.942359725s
Dec 15 15:12:51.169: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 20.958028881s
Dec 15 15:12:53.180: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 22.968082901s
Dec 15 15:12:55.190: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 24.978265937s
Dec 15 15:12:57.206: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 26.995035017s
Dec 15 15:12:59.320: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Running", Reason="", readiness=true. Elapsed: 29.10828605s
Dec 15 15:13:01.329: INFO: Pod "pod-subpath-test-projected-6qrh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.117191476s
STEP: Saw pod success
Dec 15 15:13:01.329: INFO: Pod "pod-subpath-test-projected-6qrh" satisfied condition "success or failure"
Dec 15 15:13:01.334: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-6qrh container test-container-subpath-projected-6qrh: 
STEP: delete the pod
Dec 15 15:13:01.411: INFO: Waiting for pod pod-subpath-test-projected-6qrh to disappear
Dec 15 15:13:01.456: INFO: Pod pod-subpath-test-projected-6qrh no longer exists
STEP: Deleting pod pod-subpath-test-projected-6qrh
Dec 15 15:13:01.456: INFO: Deleting pod "pod-subpath-test-projected-6qrh" in namespace "subpath-8967"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:13:01.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8967" for this suite.
Dec 15 15:13:07.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:13:07.605: INFO: namespace subpath-8967 deletion completed in 6.13825394s

• [SLOW TEST:37.491 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
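For context, the atomic-writer subpath test above mounts a single key of a projected volume via subPath. A minimal sketch of that pod shape using the k8s.io/api types (names, image, and the mounted key are illustrative assumptions, not taken from this run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// SubPath mounts a single key of the projected volume,
				// which is what the atomic-writer test exercises.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/data/key",
					SubPath:   "key",
				}},
				Command: []string{"cat", "/data/key"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}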
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:13:07.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 15 15:13:14.724: INFO: 0 pods remaining
Dec 15 15:13:14.724: INFO: 0 pods have nil DeletionTimestamp
Dec 15 15:13:14.724: INFO: 
STEP: Gathering metrics
W1215 15:13:15.384804       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 15 15:13:15.384: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:13:15.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5653" for this suite.
Dec 15 15:13:27.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:13:27.769: INFO: namespace gc-5653 deletion completed in 12.379635572s

• [SLOW TEST:20.163 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
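The deleteOptions behaviour asserted above corresponds to foreground cascading deletion: the RC is kept (carrying the foregroundDeletion finalizer) until the garbage collector has removed all of its pods. A minimal sketch of the corresponding DeleteOptions value:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground propagation: the owner object is only removed once the
	// garbage collector has deleted all of its dependents.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	b, _ := json.Marshal(opts)
	fmt.Println(string(b)) // {"propagationPolicy":"Foreground"}
}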
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:13:27.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 15:13:28.059: INFO: Create a RollingUpdate DaemonSet
Dec 15 15:13:28.063: INFO: Check that daemon pods launch on every node of the cluster
Dec 15 15:13:28.089: INFO: Number of nodes with available pods: 0
Dec 15 15:13:28.089: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:29.112: INFO: Number of nodes with available pods: 0
Dec 15 15:13:29.112: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:31.193: INFO: Number of nodes with available pods: 0
Dec 15 15:13:31.193: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:32.112: INFO: Number of nodes with available pods: 0
Dec 15 15:13:32.112: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:33.104: INFO: Number of nodes with available pods: 0
Dec 15 15:13:33.104: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:35.236: INFO: Number of nodes with available pods: 0
Dec 15 15:13:35.236: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:36.108: INFO: Number of nodes with available pods: 0
Dec 15 15:13:36.108: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:37.373: INFO: Number of nodes with available pods: 0
Dec 15 15:13:37.373: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:38.098: INFO: Number of nodes with available pods: 0
Dec 15 15:13:38.098: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:39.106: INFO: Number of nodes with available pods: 1
Dec 15 15:13:39.106: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:13:40.109: INFO: Number of nodes with available pods: 2
Dec 15 15:13:40.109: INFO: Number of running nodes: 2, number of available pods: 2
Dec 15 15:13:40.109: INFO: Update the DaemonSet to trigger a rollout
Dec 15 15:13:40.121: INFO: Updating DaemonSet daemon-set
Dec 15 15:13:47.322: INFO: Roll back the DaemonSet before rollout is complete
Dec 15 15:13:47.346: INFO: Updating DaemonSet daemon-set
Dec 15 15:13:47.346: INFO: Make sure DaemonSet rollback is complete
Dec 15 15:13:47.371: INFO: Wrong image for pod: daemon-set-qp2bm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 15 15:13:47.371: INFO: Pod daemon-set-qp2bm is not available
Dec 15 15:13:48.401: INFO: Wrong image for pod: daemon-set-qp2bm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 15 15:13:48.401: INFO: Pod daemon-set-qp2bm is not available
Dec 15 15:13:49.407: INFO: Wrong image for pod: daemon-set-qp2bm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 15 15:13:49.407: INFO: Pod daemon-set-qp2bm is not available
Dec 15 15:13:50.399: INFO: Wrong image for pod: daemon-set-qp2bm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 15 15:13:50.399: INFO: Pod daemon-set-qp2bm is not available
Dec 15 15:13:51.397: INFO: Pod daemon-set-rg7h6 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7102, will wait for the garbage collector to delete the pods
Dec 15 15:13:51.470: INFO: Deleting DaemonSet.extensions daemon-set took: 10.437068ms
Dec 15 15:13:51.971: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.654884ms
Dec 15 15:14:07.977: INFO: Number of nodes with available pods: 0
Dec 15 15:14:07.977: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 15:14:07.981: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7102/daemonsets","resourceVersion":"16778107"},"items":null}

Dec 15 15:14:07.983: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7102/pods","resourceVersion":"16778107"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:14:07.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7102" for this suite.
Dec 15 15:14:14.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:14:14.163: INFO: namespace daemonsets-7102 deletion completed in 6.160291544s

• [SLOW TEST:46.393 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
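The rollback flow above (a bad image pushed mid-rollout, then rolled back, with pod daemon-set-qp2bm never restarted unnecessarily) can be approximated with plain kubectl. A hypothetical sketch, assuming the namespace, DaemonSet name, and images seen in this run, plus a container named app (the container name is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		// Trigger a rollout to the non-existent image from the log.
		{"kubectl", "-n", "daemonsets-7102", "set", "image",
			"daemonset/daemon-set", "app=foo:non-existent"},
		// Roll back before the broken rollout completes.
		{"kubectl", "-n", "daemonsets-7102", "rollout", "undo",
			"daemonset/daemon-set"},
		// Wait until all pods run the expected nginx:1.14-alpine again.
		{"kubectl", "-n", "daemonsets-7102", "rollout", "status",
			"daemonset/daemon-set"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		fmt.Printf("err=%v\n%s\n", err, out)
	}
}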
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:14:14.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-b662724c-adf4-4422-a69e-992de8a59ec9
STEP: Creating secret with name s-test-opt-upd-ef0d2aa0-4865-4458-bc3f-76319c72ba57
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b662724c-adf4-4422-a69e-992de8a59ec9
STEP: Updating secret s-test-opt-upd-ef0d2aa0-4865-4458-bc3f-76319c72ba57
STEP: Creating secret with name s-test-opt-create-db403ea6-7ae0-4685-91c6-d43bea88904a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:15:36.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4204" for this suite.
Dec 15 15:16:00.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:16:00.201: INFO: namespace projected-4204 deletion completed in 24.158725554s

• [SLOW TEST:106.038 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
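The optional-secret behaviour above hinges on the Optional flag of each projected source: deleting an optional secret must not break the volume, and the kubelet is expected to reflect deletes, updates, and creates in the mounted files, which is what the test waits to observe. A minimal sketch of such a projected source (secret names shortened; the real test appends generated UUID suffixes):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	src := corev1.ProjectedVolumeSource{
		Sources: []corev1.VolumeProjection{
			// This source will be deleted during the test; Optional keeps
			// the volume valid when the secret disappears.
			{Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
				Optional:             &optional,
			}},
			// This source will be updated in place.
			{Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
				Optional:             &optional,
			}},
		},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}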
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:16:00.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 15 15:16:00.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1161'
Dec 15 15:16:00.586: INFO: stderr: ""
Dec 15 15:16:00.586: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 15:16:00.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1161'
Dec 15 15:16:00.926: INFO: stderr: ""
Dec 15 15:16:00.927: INFO: stdout: "update-demo-nautilus-c98xz update-demo-nautilus-q45t5 "
Dec 15 15:16:00.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c98xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:01.128: INFO: stderr: ""
Dec 15 15:16:01.128: INFO: stdout: ""
Dec 15 15:16:01.128: INFO: update-demo-nautilus-c98xz is created but not running
Dec 15 15:16:06.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1161'
Dec 15 15:16:07.298: INFO: stderr: ""
Dec 15 15:16:07.299: INFO: stdout: "update-demo-nautilus-c98xz update-demo-nautilus-q45t5 "
Dec 15 15:16:07.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c98xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:07.802: INFO: stderr: ""
Dec 15 15:16:07.802: INFO: stdout: ""
Dec 15 15:16:07.802: INFO: update-demo-nautilus-c98xz is created but not running
Dec 15 15:16:12.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1161'
Dec 15 15:16:12.954: INFO: stderr: ""
Dec 15 15:16:12.954: INFO: stdout: "update-demo-nautilus-c98xz update-demo-nautilus-q45t5 "
Dec 15 15:16:12.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c98xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:13.073: INFO: stderr: ""
Dec 15 15:16:13.073: INFO: stdout: "true"
Dec 15 15:16:13.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c98xz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:13.187: INFO: stderr: ""
Dec 15 15:16:13.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 15:16:13.187: INFO: validating pod update-demo-nautilus-c98xz
Dec 15 15:16:13.214: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 15:16:13.214: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 15 15:16:13.214: INFO: update-demo-nautilus-c98xz is verified up and running
Dec 15 15:16:13.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q45t5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:13.335: INFO: stderr: ""
Dec 15 15:16:13.336: INFO: stdout: "true"
Dec 15 15:16:13.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q45t5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:13.488: INFO: stderr: ""
Dec 15 15:16:13.488: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 15 15:16:13.488: INFO: validating pod update-demo-nautilus-q45t5
Dec 15 15:16:13.498: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 15 15:16:13.498: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 15 15:16:13.498: INFO: update-demo-nautilus-q45t5 is verified up and running
STEP: rolling-update to new replication controller
Dec 15 15:16:13.500: INFO: scanned /root for discovery docs: 
Dec 15 15:16:13.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1161'
Dec 15 15:16:45.200: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 15 15:16:45.200: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 15 15:16:45.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1161'
Dec 15 15:16:45.377: INFO: stderr: ""
Dec 15 15:16:45.377: INFO: stdout: "update-demo-kitten-5qt6k update-demo-kitten-kddqc update-demo-nautilus-q45t5 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 15 15:16:50.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1161'
Dec 15 15:16:50.653: INFO: stderr: ""
Dec 15 15:16:50.653: INFO: stdout: "update-demo-kitten-5qt6k update-demo-kitten-kddqc "
Dec 15 15:16:50.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5qt6k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:50.751: INFO: stderr: ""
Dec 15 15:16:50.751: INFO: stdout: "true"
Dec 15 15:16:50.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5qt6k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:50.880: INFO: stderr: ""
Dec 15 15:16:50.880: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 15 15:16:50.880: INFO: validating pod update-demo-kitten-5qt6k
Dec 15 15:16:50.905: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 15 15:16:50.905: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 15 15:16:50.905: INFO: update-demo-kitten-5qt6k is verified up and running
Dec 15 15:16:50.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kddqc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:51.071: INFO: stderr: ""
Dec 15 15:16:51.071: INFO: stdout: "true"
Dec 15 15:16:51.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kddqc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1161'
Dec 15 15:16:51.152: INFO: stderr: ""
Dec 15 15:16:51.152: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 15 15:16:51.152: INFO: validating pod update-demo-kitten-kddqc
Dec 15 15:16:51.179: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 15 15:16:51.179: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 15 15:16:51.179: INFO: update-demo-kitten-kddqc is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:16:51.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1161" for this suite.
Dec 15 15:17:17.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:17:17.356: INFO: namespace kubectl-1161 deletion completed in 26.173148805s

• [SLOW TEST:77.155 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
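As the stderr above notes, kubectl rolling-update was already deprecated in this 1.15-era run. A hypothetical sketch of the equivalent modern flow using a Deployment rollout instead (names and images are assumptions based on this run; the container name nautilus follows kubectl's image-derived default):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		// Create the initial workload as a Deployment rather than an RC.
		{"kubectl", "create", "deployment", "update-demo",
			"--image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0"},
		// Rolling update: swap the image and let the Deployment controller
		// scale the new ReplicaSet up and the old one down.
		{"kubectl", "set", "image", "deployment/update-demo",
			"nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0"},
		// Block until the rollout completes.
		{"kubectl", "rollout", "status", "deployment/update-demo"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("err=%v\n%s\n", err, out)
	}
}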
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:17:17.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 15:17:17.483: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 15 15:17:22.496: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 15 15:17:26.519: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 15 15:17:28.531: INFO: Creating deployment "test-rollover-deployment"
Dec 15 15:17:28.582: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 15 15:17:30.627: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 15 15:17:30.645: INFO: Ensure that both replica sets have 1 created replica
Dec 15 15:17:30.653: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 15 15:17:30.665: INFO: Updating deployment test-rollover-deployment
Dec 15 15:17:30.665: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 15 15:17:34.749: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 15 15:17:34.785: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 15 15:17:34.869: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:34.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019852, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:36.890: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:36.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019852, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:38.888: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:38.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019852, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:40.892: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:40.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019860, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:42.885: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:42.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019860, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:44.899: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:44.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019860, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:46.888: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:46.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019860, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:48.894: INFO: all replica sets need to contain the pod-template-hash label
Dec 15 15:17:48.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019860, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712019848, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 15 15:17:50.902: INFO: 
Dec 15 15:17:50.903: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 15 15:17:50.914: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4307,SelfLink:/apis/apps/v1/namespaces/deployment-4307/deployments/test-rollover-deployment,UID:61eaeeba-3b45-4ce5-9cec-16e62b014e30,ResourceVersion:16778668,Generation:2,CreationTimestamp:2019-12-15 15:17:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-15 15:17:28 +0000 UTC 2019-12-15 15:17:28 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-15 15:17:50 +0000 UTC 2019-12-15 15:17:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 15 15:17:50.918: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4307,SelfLink:/apis/apps/v1/namespaces/deployment-4307/replicasets/test-rollover-deployment-854595fc44,UID:0873c1e7-cb8d-4244-a4e4-cb2b939eff3a,ResourceVersion:16778659,Generation:2,CreationTimestamp:2019-12-15 15:17:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 61eaeeba-3b45-4ce5-9cec-16e62b014e30 0xc002eac887 0xc002eac888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 15 15:17:50.918: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 15 15:17:50.918: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4307,SelfLink:/apis/apps/v1/namespaces/deployment-4307/replicasets/test-rollover-controller,UID:288264ff-3017-47fc-ab93-7f01dd5b47b9,ResourceVersion:16778667,Generation:2,CreationTimestamp:2019-12-15 15:17:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 61eaeeba-3b45-4ce5-9cec-16e62b014e30 0xc002eac78f 0xc002eac7a0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 15 15:17:50.918: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4307,SelfLink:/apis/apps/v1/namespaces/deployment-4307/replicasets/test-rollover-deployment-9b8b997cf,UID:bf1620b2-aef7-4df4-baca-974115793aa3,ResourceVersion:16778623,Generation:2,CreationTimestamp:2019-12-15 15:17:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 61eaeeba-3b45-4ce5-9cec-16e62b014e30 0xc002eac950 0xc002eac951}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 15 15:17:50.926: INFO: Pod "test-rollover-deployment-854595fc44-p68z8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-p68z8,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4307,SelfLink:/api/v1/namespaces/deployment-4307/pods/test-rollover-deployment-854595fc44-p68z8,UID:1118f9b9-90c2-401b-8f79-0a0e88962128,ResourceVersion:16778643,Generation:0,CreationTimestamp:2019-12-15 15:17:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0873c1e7-cb8d-4244-a4e4-cb2b939eff3a 0xc0031e82b7 0xc0031e82b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fzxkv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fzxkv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-fzxkv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031e8330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031e8350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:17:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:17:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:17:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-15 15:17:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-15 15:17:33 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-15 15:17:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://7ed1cdbcb9fcefb6e37e59d4a129bd65defb232b0870d0cee7dd3b00074b3c16}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:17:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4307" for this suite.
Dec 15 15:17:56.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:17:57.082: INFO: namespace deployment-4307 deletion completed in 6.15145312s

• [SLOW TEST:39.725 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
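The rollover above is governed by the strategy visible in the Deployment dump: MaxSurge 1, MaxUnavailable 0, and MinReadySeconds 10, so a new pod must stay ready for 10 seconds before the old replica sets are scaled to zero. A minimal sketch of those fields with the k8s.io/api types (selector and pod template omitted for brevity):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	spec := appsv1.DeploymentSpec{
		// A new pod only counts as available after 10s of readiness,
		// which is why the status above shows UnavailableReplicas:1
		// for several polls before the rollover finishes.
		MinReadySeconds: 10,
		Strategy: appsv1.DeploymentStrategy{
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxUnavailable: &maxUnavailable,
				MaxSurge:       &maxSurge,
			},
		},
	}
	b, _ := json.MarshalIndent(spec.Strategy, "", "  ")
	fmt.Println(string(b))
}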
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:17:57.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 15 15:17:58.660: INFO: Pod name wrapped-volume-race-2c2caac8-1d09-46e2-828c-f4e4b3efc775: Found 0 pods out of 5
Dec 15 15:18:04.448: INFO: Pod name wrapped-volume-race-2c2caac8-1d09-46e2-828c-f4e4b3efc775: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2c2caac8-1d09-46e2-828c-f4e4b3efc775 in namespace emptydir-wrapper-887, will wait for the garbage collector to delete the pods
Dec 15 15:18:30.655: INFO: Deleting ReplicationController wrapped-volume-race-2c2caac8-1d09-46e2-828c-f4e4b3efc775 took: 56.73191ms
Dec 15 15:18:31.057: INFO: Terminating ReplicationController wrapped-volume-race-2c2caac8-1d09-46e2-828c-f4e4b3efc775 pods took: 401.392465ms
STEP: Creating RC which spawns configmap-volume pods
Dec 15 15:19:17.178: INFO: Pod name wrapped-volume-race-2f0eb901-9bf0-4b4e-be7f-611b78810e5a: Found 0 pods out of 5
Dec 15 15:19:22.275: INFO: Pod name wrapped-volume-race-2f0eb901-9bf0-4b4e-be7f-611b78810e5a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2f0eb901-9bf0-4b4e-be7f-611b78810e5a in namespace emptydir-wrapper-887, will wait for the garbage collector to delete the pods
Dec 15 15:19:56.478: INFO: Deleting ReplicationController wrapped-volume-race-2f0eb901-9bf0-4b4e-be7f-611b78810e5a took: 22.806677ms
Dec 15 15:19:56.879: INFO: Terminating ReplicationController wrapped-volume-race-2f0eb901-9bf0-4b4e-be7f-611b78810e5a pods took: 401.276086ms
STEP: Creating RC which spawns configmap-volume pods
Dec 15 15:20:46.766: INFO: Pod name wrapped-volume-race-1a58c0f8-e6a2-495d-91ea-c4745e8ed8e6: Found 0 pods out of 5
Dec 15 15:20:51.787: INFO: Pod name wrapped-volume-race-1a58c0f8-e6a2-495d-91ea-c4745e8ed8e6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1a58c0f8-e6a2-495d-91ea-c4745e8ed8e6 in namespace emptydir-wrapper-887, will wait for the garbage collector to delete the pods
Dec 15 15:21:22.051: INFO: Deleting ReplicationController wrapped-volume-race-1a58c0f8-e6a2-495d-91ea-c4745e8ed8e6 took: 20.713586ms
Dec 15 15:21:22.452: INFO: Terminating ReplicationController wrapped-volume-race-1a58c0f8-e6a2-495d-91ea-c4745e8ed8e6 pods took: 400.645497ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:22:08.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-887" for this suite.
Dec 15 15:22:18.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:22:18.488: INFO: namespace emptydir-wrapper-887 deletion completed in 10.123479591s

• [SLOW TEST:261.405 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
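Each pod the race test's RC spawns mounts ConfigMap-backed volumes, which is where the historical "wrapper volume" race lived. A sketch of that pod shape, with illustrative names (wrapped-volume-race-demo, cm-0) and a busybox image standing in for the test's own:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithConfigMapVolume builds the kind of pod the RC above spawns:
// a ConfigMap projected into the container through a volume mount.
func podWithConfigMapVolume(name, cmName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "cm",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:         "test",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
}

func main() { _ = podWithConfigMapVolume("wrapped-volume-race-demo", "cm-0") }

------------------------------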
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:22:18.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b
Dec 15 15:22:18.687: INFO: Pod name my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b: Found 0 pods out of 1
Dec 15 15:22:23.702: INFO: Pod name my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b: Found 1 pods out of 1
Dec 15 15:22:23.703: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b" are running
Dec 15 15:22:29.715: INFO: Pod "my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b-dg94m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 15:22:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 15:22:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 15:22:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-15 15:22:18 +0000 UTC Reason: Message:}])
Dec 15 15:22:29.715: INFO: Trying to dial the pod
Dec 15 15:22:34.741: INFO: Controller my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b: Got expected result from replica 1 [my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b-dg94m]: "my-hostname-basic-394b76e9-ef17-4669-b712-8be117ccfd9b-dg94m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:22:34.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7325" for this suite.
Dec 15 15:22:40.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:22:40.833: INFO: namespace replication-controller-7325 deletion completed in 6.086564448s

• [SLOW TEST:22.344 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
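The test builds a one-replica ReplicationController whose pod answers with its own hostname, then dials the replica. A sketch of an equivalent object; the serve-hostname image and port 9376 are assumptions here, as the conformance test constructs its own spec internally:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// basicRC mirrors the test's shape: one replica of a serve-hostname
// style image, selected by a "name" label like the log's pods.
func basicRC(name string) *v1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []v1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
}

func main() { _ = basicRC("my-hostname-basic-demo") }

------------------------------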
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:22:40.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 15 15:22:40.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7057'
Dec 15 15:22:43.124: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 15 15:22:43.124: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 15 15:22:43.291: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-dgqkj]
Dec 15 15:22:43.291: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-dgqkj" in namespace "kubectl-7057" to be "running and ready"
Dec 15 15:22:43.300: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Pending", Reason="", readiness=false. Elapsed: 9.556079ms
Dec 15 15:22:45.307: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015998422s
Dec 15 15:22:47.322: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031550588s
Dec 15 15:22:49.332: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041434208s
Dec 15 15:22:51.354: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062812897s
Dec 15 15:22:53.365: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07373739s
Dec 15 15:22:55.375: INFO: Pod "e2e-test-nginx-rc-dgqkj": Phase="Running", Reason="", readiness=true. Elapsed: 12.083673666s
Dec 15 15:22:55.375: INFO: Pod "e2e-test-nginx-rc-dgqkj" satisfied condition "running and ready"
Dec 15 15:22:55.375: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-dgqkj]
Dec 15 15:22:55.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7057'
Dec 15 15:22:55.652: INFO: stderr: ""
Dec 15 15:22:55.653: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 15 15:22:55.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7057'
Dec 15 15:22:55.794: INFO: stderr: ""
Dec 15 15:22:55.794: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:22:55.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7057" for this suite.
Dec 15 15:23:17.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:23:17.965: INFO: namespace kubectl-7057 deletion completed in 22.164968332s

• [SLOW TEST:37.133 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
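The harness drives kubectl as a subprocess, exactly as the Running '...' lines show. A sketch of the same pattern in Go, reusing the binary and kubeconfig paths from this log; note that --generator=run/v1 is the deprecated flag the stderr above warns about:

package main

import (
	"fmt"
	"os/exec"
)

// runKubectl shells out the way the framework's log lines show:
// /usr/local/bin/kubectl --kubeconfig=/root/.kube/config <args...>.
// Both paths are taken from the log; adjust for your environment.
func runKubectl(args ...string) (string, error) {
	base := []string{"--kubeconfig=/root/.kube/config"}
	out, err := exec.Command("/usr/local/bin/kubectl", append(base, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runKubectl("run", "e2e-test-nginx-rc",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--generator=run/v1", "--namespace=kubectl-7057")
	fmt.Println(out, err)
}

------------------------------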
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:23:17.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-fbbc50f9-c7f0-4b71-97b6-11f9f93ff530
STEP: Creating configMap with name cm-test-opt-upd-03d4df63-64e1-4ddd-bf6a-604e75d869e3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-fbbc50f9-c7f0-4b71-97b6-11f9f93ff530
STEP: Updating configmap cm-test-opt-upd-03d4df63-64e1-4ddd-bf6a-604e75d869e3
STEP: Creating configMap with name cm-test-opt-create-1726af13-7115-44c9-814a-82f02ee55037
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:23:34.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8736" for this suite.
Dec 15 15:24:12.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:24:12.709: INFO: namespace configmap-8736 deletion completed in 38.180014853s

• [SLOW TEST:54.743 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
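The "optional" in this test is the Optional field on the ConfigMap volume source: when set, a missing ConfigMap yields an empty mount instead of a pod failure, which is why deleting cm-test-opt-del-* above does not kill the pod. A sketch of that volume source, with a hypothetical name:

package main

import (
	v1 "k8s.io/api/core/v1"
)

// optionalCM builds a ConfigMap volume source whose referent may be
// absent: with Optional set, the kubelet mounts an empty directory
// rather than failing the pod, and later creation of the ConfigMap
// is eventually reflected in the volume.
func optionalCM(name string) v1.VolumeSource {
	optional := true
	return v1.VolumeSource{
		ConfigMap: &v1.ConfigMapVolumeSource{
			LocalObjectReference: v1.LocalObjectReference{Name: name},
			Optional:             &optional,
		},
	}
}

func main() { _ = optionalCM("cm-test-opt-del-demo") }

------------------------------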
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:24:12.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8621.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8621.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8621.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8621.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8621.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.204.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.204.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.204.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.204.39_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8621.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8621.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8621.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8621.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8621.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8621.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.204.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.204.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.204.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.204.39_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 15 15:24:25.075: INFO: Unable to read wheezy_udp@dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.118: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.124: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.130: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.138: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.144: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.152: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.158: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.163: INFO: Unable to read 10.102.204.39_udp@PTR from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.172: INFO: Unable to read 10.102.204.39_tcp@PTR from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.180: INFO: Unable to read jessie_udp@dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.197: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.202: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.206: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.213: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8621.svc.cluster.local from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.216: INFO: Unable to read jessie_udp@PodARecord from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.222: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.225: INFO: Unable to read 10.102.204.39_udp@PTR from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.229: INFO: Unable to read 10.102.204.39_tcp@PTR from pod dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919: the server could not find the requested resource (get pods dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919)
Dec 15 15:24:25.229: INFO: Lookups using dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919 failed for: [wheezy_udp@dns-test-service.dns-8621.svc.cluster.local wheezy_tcp@dns-test-service.dns-8621.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8621.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8621.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.204.39_udp@PTR 10.102.204.39_tcp@PTR jessie_udp@dns-test-service.dns-8621.svc.cluster.local jessie_tcp@dns-test-service.dns-8621.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8621.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8621.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8621.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.102.204.39_udp@PTR 10.102.204.39_tcp@PTR]

Dec 15 15:24:30.345: INFO: DNS probes using dns-8621/dns-test-b5ecc99f-3c90-4ab5-ba2d-d99de5d9b919 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:24:30.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8621" for this suite.
Dec 15 15:24:36.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:24:36.915: INFO: namespace dns-8621 deletion completed in 6.285407727s

• [SLOW TEST:24.205 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
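The probe pods run dig loops against the service's A, SRV, and PTR records until every file under /results is written. The same A and SRV lookups from Go, assuming this run's dns-8621 namespace and in-cluster resolution (outside a pod these names will not resolve):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	var r net.Resolver
	// A record, as in the wheezy_udp@dns-test-service... probes.
	addrs, err := r.LookupHost(ctx, "dns-test-service.dns-8621.svc.cluster.local")
	if err != nil {
		fmt.Println("A lookup failed:", err)
		return
	}
	fmt.Println("A records:", addrs)
	// SRV record, as in the _http._tcp probes.
	cname, srvs, err := r.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-8621.svc.cluster.local")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	fmt.Println("SRV cname:", cname)
	for _, s := range srvs {
		fmt.Printf("  %s:%d\n", s.Target, s.Port)
	}
}

------------------------------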
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:24:36.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 15 15:24:37.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a" in namespace "downward-api-2553" to be "success or failure"
Dec 15 15:24:37.037: INFO: Pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.29088ms
Dec 15 15:24:39.055: INFO: Pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038584487s
Dec 15 15:24:41.066: INFO: Pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048945231s
Dec 15 15:24:43.076: INFO: Pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059078161s
Dec 15 15:24:45.083: INFO: Pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065851879s
STEP: Saw pod success
Dec 15 15:24:45.083: INFO: Pod "downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a" satisfied condition "success or failure"
Dec 15 15:24:45.091: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a container client-container: 
STEP: delete the pod
Dec 15 15:24:45.261: INFO: Waiting for pod downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a to disappear
Dec 15 15:24:45.276: INFO: Pod downwardapi-volume-f9300816-63bc-47e2-aec0-fbf4dfd0754a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:24:45.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2553" for this suite.
Dec 15 15:24:51.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:24:51.525: INFO: namespace downward-api-2553 deletion completed in 6.232339917s

• [SLOW TEST:14.610 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
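The downward API volume here renders the container's cpu limit into a file that client-container reads back. A sketch of that volume source; the cpu_limit file name is an assumption, and the test's actual path may differ:

package main

import (
	v1 "k8s.io/api/core/v1"
)

// downwardAPIVolume exposes the named container's cpu limit as a file
// inside the pod, via a resourceFieldRef item.
func downwardAPIVolume(containerName string) v1.VolumeSource {
	return v1.VolumeSource{
		DownwardAPI: &v1.DownwardAPIVolumeSource{
			Items: []v1.DownwardAPIVolumeFile{{
				Path: "cpu_limit",
				ResourceFieldRef: &v1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      "limits.cpu",
				},
			}},
		},
	}
}

func main() { _ = downwardAPIVolume("client-container") }

------------------------------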
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:24:51.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 15 15:24:51.706: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 15 15:24:51.790: INFO: Number of nodes with available pods: 0
Dec 15 15:24:51.790: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 15 15:24:51.849: INFO: Number of nodes with available pods: 0
Dec 15 15:24:51.849: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:52.866: INFO: Number of nodes with available pods: 0
Dec 15 15:24:52.866: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:53.871: INFO: Number of nodes with available pods: 0
Dec 15 15:24:53.871: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:55.109: INFO: Number of nodes with available pods: 0
Dec 15 15:24:55.109: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:55.868: INFO: Number of nodes with available pods: 0
Dec 15 15:24:55.868: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:56.865: INFO: Number of nodes with available pods: 0
Dec 15 15:24:56.865: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:57.872: INFO: Number of nodes with available pods: 0
Dec 15 15:24:57.872: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:58.870: INFO: Number of nodes with available pods: 0
Dec 15 15:24:58.870: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:24:59.862: INFO: Number of nodes with available pods: 0
Dec 15 15:24:59.862: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:00.876: INFO: Number of nodes with available pods: 1
Dec 15 15:25:00.876: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 15 15:25:00.939: INFO: Number of nodes with available pods: 1
Dec 15 15:25:00.939: INFO: Number of running nodes: 0, number of available pods: 1
Dec 15 15:25:01.951: INFO: Number of nodes with available pods: 0
Dec 15 15:25:01.951: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 15 15:25:01.985: INFO: Number of nodes with available pods: 0
Dec 15 15:25:01.985: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:02.996: INFO: Number of nodes with available pods: 0
Dec 15 15:25:02.996: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:03.997: INFO: Number of nodes with available pods: 0
Dec 15 15:25:03.997: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:04.992: INFO: Number of nodes with available pods: 0
Dec 15 15:25:04.992: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:05.994: INFO: Number of nodes with available pods: 0
Dec 15 15:25:05.994: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:06.994: INFO: Number of nodes with available pods: 0
Dec 15 15:25:06.994: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:07.992: INFO: Number of nodes with available pods: 0
Dec 15 15:25:07.992: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:08.992: INFO: Number of nodes with available pods: 0
Dec 15 15:25:08.992: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:09.995: INFO: Number of nodes with available pods: 0
Dec 15 15:25:09.995: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:10.992: INFO: Number of nodes with available pods: 0
Dec 15 15:25:10.992: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:12.011: INFO: Number of nodes with available pods: 0
Dec 15 15:25:12.011: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:12.993: INFO: Number of nodes with available pods: 0
Dec 15 15:25:12.993: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:13.997: INFO: Number of nodes with available pods: 0
Dec 15 15:25:13.997: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:14.994: INFO: Number of nodes with available pods: 0
Dec 15 15:25:14.994: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:16.039: INFO: Number of nodes with available pods: 0
Dec 15 15:25:16.039: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:16.992: INFO: Number of nodes with available pods: 0
Dec 15 15:25:16.992: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:18.001: INFO: Number of nodes with available pods: 0
Dec 15 15:25:18.001: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:18.995: INFO: Number of nodes with available pods: 0
Dec 15 15:25:18.995: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:20.001: INFO: Number of nodes with available pods: 0
Dec 15 15:25:20.001: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:21.003: INFO: Number of nodes with available pods: 0
Dec 15 15:25:21.003: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:22.007: INFO: Number of nodes with available pods: 0
Dec 15 15:25:22.007: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:22.997: INFO: Number of nodes with available pods: 0
Dec 15 15:25:22.997: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:23.998: INFO: Number of nodes with available pods: 0
Dec 15 15:25:23.998: INFO: Node iruya-node is running more than one daemon pod
Dec 15 15:25:25.000: INFO: Number of nodes with available pods: 1
Dec 15 15:25:25.000: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5965, will wait for the garbage collector to delete the pods
Dec 15 15:25:25.090: INFO: Deleting DaemonSet.extensions daemon-set took: 18.334808ms
Dec 15 15:25:25.391: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.700079ms
Dec 15 15:25:36.626: INFO: Number of nodes with available pods: 0
Dec 15 15:25:36.626: INFO: Number of running nodes: 0, number of available pods: 0
Dec 15 15:25:36.632: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5965/daemonsets","resourceVersion":"16780401"},"items":null}

Dec 15 15:25:36.636: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5965/pods","resourceVersion":"16780401"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:25:36.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5965" for this suite.
Dec 15 15:25:42.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:25:42.813: INFO: namespace daemonsets-5965 deletion completed in 6.128562072s

• [SLOW TEST:51.287 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
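The test toggles scheduling by relabeling iruya-node, then waits for daemon pods to launch and drain. A sketch of such a label flip via a strategic-merge patch, assuming the log's kubeconfig path, an illustrative "color" label key (the test uses its own key), and a pre-1.18 client-go (no context argument on Patch):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Flip the node label the DaemonSet's node selector matches on.
	patch := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
	node, err := cs.CoreV1().Nodes().Patch("iruya-node", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Labels)
}

------------------------------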
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 15 15:25:42.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 15 15:26:03.526: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:03.541: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:05.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:05.550: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:07.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:07.554: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:09.542: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:09.559: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:11.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:11.558: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:13.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:13.551: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:15.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:15.551: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:17.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:17.552: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:19.542: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:19.555: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:21.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:21.549: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:23.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:23.549: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 15 15:26:25.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 15 15:26:25.549: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 15 15:26:25.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4751" for this suite.
Dec 15 15:26:47.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 15 15:26:47.727: INFO: namespace container-lifecycle-hook-4751 deletion completed in 22.140585856s

• [SLOW TEST:64.914 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
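A PreStop exec hook runs in the container after deletion is requested and before the kubelet kills it, which is the window the disappearance loop above is waiting out. A sketch of the pod shape with an illustrative hook command (the real test posts to its handler pod); v1.Handler is the hook type name in this API generation, later renamed LifecycleHandler:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPreStop sketches the shape of pod-with-prestop-exec-hook: on
// deletion, the kubelet runs the PreStop command, and the test's
// handler pod records the callback before the container exits.
func podWithPreStop() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &v1.Lifecycle{
					PreStop: &v1.Handler{
						Exec: &v1.ExecAction{
							// Hypothetical callback target, not the test's exact hook.
							Command: []string{"sh", "-c", "wget -qO- http://handler:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}

func main() { _ = podWithPreStop() }

------------------------------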
SSSSSSSSSSSSSSSSSSSSSSS
Dec 15 15:26:47.727: INFO: Running AfterSuite actions on all nodes
Dec 15 15:26:47.727: INFO: Running AfterSuite actions on node 1
Dec 15 15:26:47.727: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 9037.448 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (9038.10s)
FAIL