I0826 22:33:28.060881 7 e2e.go:243] Starting e2e run "19792bb5-9998-4e02-9ec7-df5bf5aadd94" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598481196 - Will randomize all specs
Will run 215 of 4413 specs

Aug 26 22:33:29.446: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 22:33:29.509: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 26 22:33:29.703: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 26 22:33:29.892: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 26 22:33:29.892: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 26 22:33:29.892: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 26 22:33:29.956: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 26 22:33:29.956: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 26 22:33:29.956: INFO: e2e test version: v1.15.12
Aug 26 22:33:29.961: INFO: kube-apiserver version: v1.15.12
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:33:29.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Aug 26 22:33:30.120: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b7e1adef-fde1-40d1-b8a5-9fd5655e1ff0
STEP: Creating a pod to test consume configMaps
Aug 26 22:33:30.171: INFO: Waiting up to 5m0s for pod "pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da" in namespace "configmap-2113" to be "success or failure"
Aug 26 22:33:30.199: INFO: Pod "pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da": Phase="Pending", Reason="", readiness=false. Elapsed: 27.771511ms
Aug 26 22:33:32.209: INFO: Pod "pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037751554s
Aug 26 22:33:34.230: INFO: Pod "pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da": Phase="Running", Reason="", readiness=true. Elapsed: 4.059450034s
Aug 26 22:33:36.267: INFO: Pod "pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09611s
STEP: Saw pod success
Aug 26 22:33:36.267: INFO: Pod "pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da" satisfied condition "success or failure"
Aug 26 22:33:36.272: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da container configmap-volume-test:
STEP: delete the pod
Aug 26 22:33:36.427: INFO: Waiting for pod pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da to disappear
Aug 26 22:33:36.432: INFO: Pod pod-configmaps-250409b2-4444-46f5-a819-657dd0bbc4da no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:33:36.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2113" for this suite.
Aug 26 22:33:42.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:33:42.630: INFO: namespace configmap-2113 deletion completed in 6.177642684s
• [SLOW TEST:12.663 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:33:42.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9421
[It] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9421
STEP: Creating statefulset with conflicting port in namespace statefulset-9421
STEP: Waiting until pod test-pod will start running in namespace statefulset-9421
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9421
Aug 26 22:33:48.981: INFO: Observed stateful pod in namespace: statefulset-9421, name: ss-0, uid: e006c26c-379f-42c5-bbd6-09d6f079c39c, status phase: Pending. Waiting for statefulset controller to delete.
Aug 26 22:33:49.257: INFO: Observed stateful pod in namespace: statefulset-9421, name: ss-0, uid: e006c26c-379f-42c5-bbd6-09d6f079c39c, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 22:33:49.283: INFO: Observed stateful pod in namespace: statefulset-9421, name: ss-0, uid: e006c26c-379f-42c5-bbd6-09d6f079c39c, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 22:33:49.313: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9421
STEP: Removing pod with conflicting port in namespace statefulset-9421
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9421 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 26 22:33:55.381: INFO: Deleting all statefulset in ns statefulset-9421
Aug 26 22:33:55.389: INFO: Scaling statefulset ss to 0
Aug 26 22:34:05.435: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 22:34:05.439: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:34:05.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9421" for this suite.
Aug 26 22:34:11.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:34:11.623: INFO: namespace statefulset-9421 deletion completed in 6.154578422s
• [SLOW TEST:28.985 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:34:11.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 22:34:11.809: INFO: Waiting up to 5m0s for pod "downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d" in namespace "projected-5465" to be "success or failure"
Aug 26 22:34:11.865: INFO: Pod "downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.42198ms
Aug 26 22:34:13.871: INFO: Pod "downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061769548s
Aug 26 22:34:15.877: INFO: Pod "downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068398909s
Aug 26 22:34:18.009: INFO: Pod "downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199444414s
STEP: Saw pod success
Aug 26 22:34:18.009: INFO: Pod "downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d" satisfied condition "success or failure"
Aug 26 22:34:18.015: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d container client-container:
STEP: delete the pod
Aug 26 22:34:18.223: INFO: Waiting for pod downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d to disappear
Aug 26 22:34:18.296: INFO: Pod downwardapi-volume-576a4681-9bd1-45c0-a524-79a45e4a467d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:34:18.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5465" for this suite.
Aug 26 22:34:24.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:34:24.817: INFO: namespace projected-5465 deletion completed in 6.511848754s
• [SLOW TEST:13.192 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:34:24.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 26 22:34:24.908: INFO: Waiting up to 5m0s for pod "downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17" in namespace "downward-api-5872" to be "success or failure"
Aug 26 22:34:24.913: INFO: Pod "downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.976081ms
Aug 26 22:34:26.918: INFO: Pod "downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010606369s
Aug 26 22:34:29.004: INFO: Pod "downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17": Phase="Running", Reason="", readiness=true. Elapsed: 4.095816348s
Aug 26 22:34:31.010: INFO: Pod "downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102236494s
STEP: Saw pod success
Aug 26 22:34:31.010: INFO: Pod "downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17" satisfied condition "success or failure"
Aug 26 22:34:31.080: INFO: Trying to get logs from node iruya-worker2 pod downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17 container dapi-container:
STEP: delete the pod
Aug 26 22:34:31.148: INFO: Waiting for pod downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17 to disappear
Aug 26 22:34:31.277: INFO: Pod downward-api-ddd63f6d-f7ea-4e4f-94e8-c1ad45de1d17 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:34:31.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5872" for this suite.
Aug 26 22:34:39.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:34:39.719: INFO: namespace downward-api-5872 deletion completed in 8.434722046s
• [SLOW TEST:14.901 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:34:39.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 26 22:34:41.166: INFO: Waiting up to 5m0s for pod "downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96" in namespace "downward-api-2515" to be "success or failure"
Aug 26 22:34:41.599: INFO: Pod "downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96": Phase="Pending", Reason="", readiness=false. Elapsed: 432.736617ms
Aug 26 22:34:43.605: INFO: Pod "downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439054088s
Aug 26 22:34:45.650: INFO: Pod "downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483962067s
Aug 26 22:34:47.657: INFO: Pod "downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.490478413s
STEP: Saw pod success
Aug 26 22:34:47.657: INFO: Pod "downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96" satisfied condition "success or failure"
Aug 26 22:34:47.660: INFO: Trying to get logs from node iruya-worker pod downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96 container dapi-container:
STEP: delete the pod
Aug 26 22:34:47.705: INFO: Waiting for pod downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96 to disappear
Aug 26 22:34:47.769: INFO: Pod downward-api-d3e7509b-3116-4a0a-9515-8a2e0975cc96 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:34:47.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2515" for this suite.
Aug 26 22:34:55.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:34:55.964: INFO: namespace downward-api-2515 deletion completed in 8.188444276s
• [SLOW TEST:16.243 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:34:55.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:35:00.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5869" for this suite.
Aug 26 22:35:06.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:35:06.391: INFO: namespace emptydir-wrapper-5869 deletion completed in 6.161740019s
• [SLOW TEST:10.424 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:35:06.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-85dd5766-609e-41d7-9c5e-a00b77c06539
STEP: Creating a pod to test consume configMaps
Aug 26 22:35:06.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc" in namespace "configmap-4133" to be "success or failure"
Aug 26 22:35:06.566: INFO: Pod "pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.504676ms
Aug 26 22:35:08.573: INFO: Pod "pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063119965s
Aug 26 22:35:10.669: INFO: Pod "pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.159600499s
Aug 26 22:35:12.677: INFO: Pod "pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16668025s
STEP: Saw pod success
Aug 26 22:35:12.677: INFO: Pod "pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc" satisfied condition "success or failure"
Aug 26 22:35:12.683: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc container configmap-volume-test:
STEP: delete the pod
Aug 26 22:35:12.706: INFO: Waiting for pod pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc to disappear
Aug 26 22:35:12.762: INFO: Pod pod-configmaps-989da927-8149-4e1c-8b54-0af8d86db0dc no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:35:12.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4133" for this suite.
Aug 26 22:35:18.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:35:18.950: INFO: namespace configmap-4133 deletion completed in 6.179448555s
• [SLOW TEST:12.558 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:35:18.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 22:35:24.817: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:35:24.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1200" for this suite.
Aug 26 22:35:30.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:35:31.118: INFO: namespace container-runtime-1200 deletion completed in 6.192855726s
• [SLOW TEST:12.167 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:35:31.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 26 22:35:31.250: INFO: Waiting up to 5m0s for pod "pod-c527959e-6d32-4592-a81f-4ddd93534bdc" in namespace "emptydir-9843" to be "success or failure"
Aug 26 22:35:31.279: INFO: Pod "pod-c527959e-6d32-4592-a81f-4ddd93534bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.029365ms
Aug 26 22:35:33.283: INFO: Pod "pod-c527959e-6d32-4592-a81f-4ddd93534bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03368083s
Aug 26 22:35:35.290: INFO: Pod "pod-c527959e-6d32-4592-a81f-4ddd93534bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040694089s
STEP: Saw pod success
Aug 26 22:35:35.291: INFO: Pod "pod-c527959e-6d32-4592-a81f-4ddd93534bdc" satisfied condition "success or failure"
Aug 26 22:35:35.295: INFO: Trying to get logs from node iruya-worker2 pod pod-c527959e-6d32-4592-a81f-4ddd93534bdc container test-container:
STEP: delete the pod
Aug 26 22:35:35.664: INFO: Waiting for pod pod-c527959e-6d32-4592-a81f-4ddd93534bdc to disappear
Aug 26 22:35:35.681: INFO: Pod pod-c527959e-6d32-4592-a81f-4ddd93534bdc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:35:35.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9843" for this suite.
Aug 26 22:35:41.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 22:35:41.896: INFO: namespace emptydir-9843 deletion completed in 6.165706857s
• [SLOW TEST:10.774 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 22:35:41.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 26 22:35:46.656: INFO: Successfully updated pod "labelsupdate76e486ee-3207-4001-b1ed-64e9476f9934"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:35:50.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-869" for this suite.
Aug 26 22:36:12.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:36:12.826: INFO: namespace downward-api-869 deletion completed in 22.138333922s • [SLOW TEST:30.928 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:36:12.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 26 22:36:12.936: INFO: Waiting up to 5m0s for pod "downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7" in namespace "downward-api-7627" to be "success or failure" Aug 26 22:36:12.945: INFO: 
Pod "downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.358759ms Aug 26 22:36:15.076: INFO: Pod "downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140112496s Aug 26 22:36:17.082: INFO: Pod "downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.14651895s Aug 26 22:36:19.090: INFO: Pod "downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154137034s STEP: Saw pod success Aug 26 22:36:19.090: INFO: Pod "downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7" satisfied condition "success or failure" Aug 26 22:36:19.095: INFO: Trying to get logs from node iruya-worker pod downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7 container dapi-container: STEP: delete the pod Aug 26 22:36:19.114: INFO: Waiting for pod downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7 to disappear Aug 26 22:36:19.118: INFO: Pod downward-api-82623f09-ae68-401c-8dcb-b4be8c5156d7 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:36:19.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7627" for this suite. 
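Each of these pod runs is polled through the same Pending → Running → Succeeded progression. As a sketch, that progression can be recovered from the raw log text with a small regex helper (hypothetical, not part of the e2e framework):

```python
import re

# Pull the Phase values out of e2e "Waiting ... for pod" records such as:
#   Pod "downward-api-...": Phase="Pending", Reason="", readiness=false. Elapsed: 9.358759ms
PHASE_RE = re.compile(r'Phase="(\w+)"')

def phase_transitions(log_text):
    """Return the deduplicated sequence of phases a pod moved through."""
    phases = []
    for match in PHASE_RE.finditer(log_text):
        if not phases or phases[-1] != match.group(1):
            phases.append(match.group(1))
    return phases

log = '''Pod "p": Phase="Pending", readiness=false. Elapsed: 9ms
Pod "p": Phase="Pending", readiness=false. Elapsed: 2.1s
Pod "p": Phase="Running", readiness=true. Elapsed: 4.1s
Pod "p": Phase="Succeeded", readiness=false. Elapsed: 6.1s'''
print(phase_transitions(log))  # ['Pending', 'Running', 'Succeeded']
```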
Aug 26 22:36:25.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:36:25.303: INFO: namespace downward-api-7627 deletion completed in 6.17708724s • [SLOW TEST:12.473 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:36:25.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 22:36:25.409: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:36:26.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3082" for this suite. Aug 26 22:36:32.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:36:32.784: INFO: namespace custom-resource-definition-3082 deletion completed in 6.270961759s • [SLOW TEST:7.479 seconds] [sig-api-machinery] CustomResourceDefinition resources /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:36:32.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 26 22:36:32.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f" in namespace "downward-api-5034" to be "success or failure" Aug 26 22:36:33.089: INFO: Pod "downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f": Phase="Pending", Reason="", readiness=false. Elapsed: 100.662883ms Aug 26 22:36:35.100: INFO: Pod "downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111996306s Aug 26 22:36:37.196: INFO: Pod "downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f": Phase="Running", Reason="", readiness=true. Elapsed: 4.207810725s Aug 26 22:36:39.204: INFO: Pod "downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.215540619s STEP: Saw pod success Aug 26 22:36:39.204: INFO: Pod "downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f" satisfied condition "success or failure" Aug 26 22:36:39.209: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f container client-container: STEP: delete the pod Aug 26 22:36:39.247: INFO: Waiting for pod downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f to disappear Aug 26 22:36:39.334: INFO: Pod downwardapi-volume-832b4a54-b243-47f6-b1d2-c1ef207b253f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:36:39.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5034" for this suite. Aug 26 22:36:45.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:36:45.782: INFO: namespace downward-api-5034 deletion completed in 6.404555237s • [SLOW TEST:12.996 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:36:45.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0826 22:37:27.565821 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 26 22:37:27.567: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:37:27.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1323" for this suite. Aug 26 22:37:37.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:37:37.823: INFO: namespace gc-1323 deletion completed in 10.245706809s • [SLOW TEST:52.039 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:37:37.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 26 22:37:38.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8132' Aug 26 22:37:45.375: INFO: stderr: "" Aug 26 22:37:45.375: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 26 22:37:50.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8132 -o json' Aug 26 22:37:51.697: INFO: stderr: "" Aug 26 22:37:51.697: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-26T22:37:45Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-8132\",\n \"resourceVersion\": \"3032700\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8132/pods/e2e-test-nginx-pod\",\n \"uid\": \"47dbb0f7-4aa7-41e6-a5a5-11c9f13f5c01\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-brskt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": 
\"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-brskt\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-brskt\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-26T22:37:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-26T22:37:49Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-26T22:37:49Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-26T22:37:45Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://4f2d8f1df43aff2070dfd09da52a094e596662e88a825515b7024c8f8bfc7dfe\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-26T22:37:48Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.9\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.46\",\n \"qosClass\": \"BestEffort\",\n 
\"startTime\": \"2020-08-26T22:37:45Z\"\n }\n}\n" STEP: replace the image in the pod Aug 26 22:37:51.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8132' Aug 26 22:37:53.596: INFO: stderr: "" Aug 26 22:37:53.596: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Aug 26 22:37:53.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8132' Aug 26 22:37:57.461: INFO: stderr: "" Aug 26 22:37:57.461: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:37:57.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8132" for this suite. 
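The replace spec verifies the image by reading the pod back as JSON, as in the `kubectl get pod -o json` call above. The same field lookup can be sketched in Python against a trimmed copy of that output (only the fields the check needs are kept):

```python
import json

# Trimmed stand-in for the `kubectl get pod e2e-test-nginx-pod -o json`
# output shown in the log above.
pod_json = '''
{
  "kind": "Pod",
  "metadata": {"name": "e2e-test-nginx-pod", "namespace": "kubectl-8132"},
  "spec": {
    "containers": [
      {"name": "e2e-test-nginx-pod",
       "image": "docker.io/library/nginx:1.14-alpine"}
    ]
  }
}
'''

pod = json.loads(pod_json)
image = pod["spec"]["containers"][0]["image"]
print(image)  # docker.io/library/nginx:1.14-alpine
```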
Aug 26 22:38:03.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:38:03.634: INFO: namespace kubectl-8132 deletion completed in 6.162184161s • [SLOW TEST:25.809 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:38:03.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 26 22:38:03.741: INFO: Waiting up to 5m0s for pod "pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511" in namespace 
"emptydir-3915" to be "success or failure" Aug 26 22:38:03.753: INFO: Pod "pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511": Phase="Pending", Reason="", readiness=false. Elapsed: 11.492399ms Aug 26 22:38:05.758: INFO: Pod "pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017210651s Aug 26 22:38:07.765: INFO: Pod "pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511": Phase="Running", Reason="", readiness=true. Elapsed: 4.023544567s Aug 26 22:38:09.772: INFO: Pod "pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030460091s STEP: Saw pod success Aug 26 22:38:09.772: INFO: Pod "pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511" satisfied condition "success or failure" Aug 26 22:38:09.777: INFO: Trying to get logs from node iruya-worker pod pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511 container test-container: STEP: delete the pod Aug 26 22:38:09.846: INFO: Waiting for pod pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511 to disappear Aug 26 22:38:09.883: INFO: Pod pod-29d91d1b-76e3-4fa3-a9ee-d677cbd39511 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:38:09.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3915" for this suite. 
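The 0644 mode exercised here is the same number that appears as `"defaultMode": 420` in the pod JSON earlier in this log: the API serializes file modes as decimal integers, and 420 decimal is 0644 octal. A quick check:

```python
import stat

# Kubernetes serializes volume defaultMode in decimal, so the 420 seen in
# the pod JSON is the familiar octal 0644 (rw-r--r--).
default_mode = 420
assert default_mode == 0o644
print(oct(default_mode))  # 0o644

# Rendered as a regular-file mode string (0o100000 is S_IFREG):
print(stat.filemode(0o100000 | default_mode))  # -rw-r--r--
```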
Aug 26 22:38:15.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:38:16.051: INFO: namespace emptydir-3915 deletion completed in 6.157213071s • [SLOW TEST:12.416 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:38:16.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2b548bbe-79a4-4f87-aa31-00384effb932 STEP: Creating a pod to test consume secrets Aug 26 22:38:16.158: INFO: Waiting up to 5m0s for pod "pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128" in namespace "secrets-5494" to be "success or failure" Aug 26 22:38:16.195: INFO: Pod "pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128": 
Phase="Pending", Reason="", readiness=false. Elapsed: 36.594641ms Aug 26 22:38:18.399: INFO: Pod "pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240884863s Aug 26 22:38:20.405: INFO: Pod "pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128": Phase="Running", Reason="", readiness=true. Elapsed: 4.247175058s Aug 26 22:38:22.413: INFO: Pod "pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.254990085s STEP: Saw pod success Aug 26 22:38:22.413: INFO: Pod "pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128" satisfied condition "success or failure" Aug 26 22:38:22.427: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128 container secret-volume-test: STEP: delete the pod Aug 26 22:38:22.529: INFO: Waiting for pod pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128 to disappear Aug 26 22:38:22.552: INFO: Pod pod-secrets-9f21e58a-3b2b-43fa-82d4-a28d24157128 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:38:22.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5494" for this suite. 
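The mounted Secret is stored base64-encoded in the API object's `data` field and decoded to plain files inside the pod. A sketch of that round trip, with a made-up value (not taken from the test):

```python
import base64

# Secret values live base64-encoded under `data` in the API object;
# the kubelet decodes them when materializing the volume files.
plaintext = b"value-1"  # hypothetical secret value, for illustration only
encoded = base64.b64encode(plaintext).decode()
print(encoded)  # dmFsdWUtMQ==

# The mounted file contains the decoded bytes:
assert base64.b64decode(encoded) == plaintext
```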
Aug 26 22:38:30.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:38:30.819: INFO: namespace secrets-5494 deletion completed in 8.259765946s • [SLOW TEST:14.766 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:38:30.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 22:38:30.989: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 26 22:38:37.034: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be 
cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 26 22:38:43.313: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4271,SelfLink:/apis/apps/v1/namespaces/deployment-4271/deployments/test-cleanup-deployment,UID:4c0f3cd8-380a-4c95-9c9c-1b322a8d0e29,ResourceVersion:3032957,Generation:1,CreationTimestamp:2020-08-26 22:38:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-26 22:38:37 +0000 UTC 2020-08-26 22:38:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-26 22:38:43 +0000 UTC 2020-08-26 22:38:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 26 22:38:43.323: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4271,SelfLink:/apis/apps/v1/namespaces/deployment-4271/replicasets/test-cleanup-deployment-55bbcbc84c,UID:aa28d96f-04cf-46c1-9cec-add94a27dd15,ResourceVersion:3032946,Generation:1,CreationTimestamp:2020-08-26 22:38:37 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4c0f3cd8-380a-4c95-9c9c-1b322a8d0e29 0x40034f6117 0x40034f6118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 26 22:38:43.337: INFO: Pod "test-cleanup-deployment-55bbcbc84c-dlxfz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-dlxfz,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4271,SelfLink:/api/v1/namespaces/deployment-4271/pods/test-cleanup-deployment-55bbcbc84c-dlxfz,UID:29e5064f-0714-46db-8f3c-3d0a0a392db2,ResourceVersion:3032945,Generation:0,CreationTimestamp:2020-08-26 22:38:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c aa28d96f-04cf-46c1-9cec-add94a27dd15 0x40034f6707 0x40034f6708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mhbk6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mhbk6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mhbk6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40034f67b0} {node.kubernetes.io/unreachable Exists NoExecute 0x40034f67d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:38:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:38:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:38:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:38:37 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.12,StartTime:2020-08-26 22:38:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-26 22:38:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://975f1f318743d3eebbe24efba5cbb455df68043bce9776f366d2aecb94fe6327}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:38:43.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4271" for this suite. Aug 26 22:38:49.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:38:49.804: INFO: namespace deployment-4271 deletion completed in 6.458084137s • [SLOW TEST:18.983 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:38:49.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-794e7f57-df6d-4f4d-8152-a3620c4dd5c0 STEP: Creating a pod to test consume secrets Aug 26 22:38:49.980: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e" in namespace "projected-3840" to be "success or failure" Aug 26 22:38:49.998: INFO: Pod "pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.095924ms Aug 26 22:38:52.115: INFO: Pod "pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134155983s Aug 26 22:38:54.122: INFO: Pod "pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e": Phase="Running", Reason="", readiness=true. Elapsed: 4.141362643s Aug 26 22:38:56.129: INFO: Pod "pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.148697315s STEP: Saw pod success Aug 26 22:38:56.130: INFO: Pod "pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e" satisfied condition "success or failure" Aug 26 22:38:56.134: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e container secret-volume-test: STEP: delete the pod Aug 26 22:38:56.215: INFO: Waiting for pod pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e to disappear Aug 26 22:38:56.324: INFO: Pod pod-projected-secrets-687fb1cb-749e-4e3a-a5ab-5979aa96517e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:38:56.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3840" for this suite. Aug 26 22:39:02.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:39:02.497: INFO: namespace projected-3840 deletion completed in 6.164609854s • [SLOW TEST:12.687 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] 
[sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:39:02.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5b224b3b-cd4e-44f8-b354-fd2d5bc7d7c9 STEP: Creating a pod to test consume configMaps Aug 26 22:39:02.986: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b" in namespace "projected-7288" to be "success or failure" Aug 26 22:39:03.010: INFO: Pod "pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.581172ms Aug 26 22:39:05.017: INFO: Pod "pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031085845s Aug 26 22:39:07.023: INFO: Pod "pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036441052s STEP: Saw pod success Aug 26 22:39:07.023: INFO: Pod "pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b" satisfied condition "success or failure" Aug 26 22:39:07.033: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b container projected-configmap-volume-test: STEP: delete the pod Aug 26 22:39:07.297: INFO: Waiting for pod pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b to disappear Aug 26 22:39:07.444: INFO: Pod pod-projected-configmaps-c4f1819f-0eff-406d-87f0-e4cc0fae898b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:39:07.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7288" for this suite. Aug 26 22:39:13.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:39:13.602: INFO: namespace projected-7288 deletion completed in 6.14767829s • [SLOW TEST:11.101 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:39:13.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 26 22:39:13.747: INFO: Waiting up to 5m0s for pod "pod-72ae418d-297f-4bad-99b4-1c5e118321a8" in namespace "emptydir-6183" to be "success or failure" Aug 26 22:39:13.752: INFO: Pod "pod-72ae418d-297f-4bad-99b4-1c5e118321a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694833ms Aug 26 22:39:15.888: INFO: Pod "pod-72ae418d-297f-4bad-99b4-1c5e118321a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140440987s Aug 26 22:39:18.061: INFO: Pod "pod-72ae418d-297f-4bad-99b4-1c5e118321a8": Phase="Running", Reason="", readiness=true. Elapsed: 4.313219262s Aug 26 22:39:20.067: INFO: Pod "pod-72ae418d-297f-4bad-99b4-1c5e118321a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.319685132s STEP: Saw pod success Aug 26 22:39:20.067: INFO: Pod "pod-72ae418d-297f-4bad-99b4-1c5e118321a8" satisfied condition "success or failure" Aug 26 22:39:20.167: INFO: Trying to get logs from node iruya-worker pod pod-72ae418d-297f-4bad-99b4-1c5e118321a8 container test-container: STEP: delete the pod Aug 26 22:39:20.252: INFO: Waiting for pod pod-72ae418d-297f-4bad-99b4-1c5e118321a8 to disappear Aug 26 22:39:20.273: INFO: Pod pod-72ae418d-297f-4bad-99b4-1c5e118321a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:39:20.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6183" for this suite. Aug 26 22:39:26.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:39:26.444: INFO: namespace emptydir-6183 deletion completed in 6.160724723s • [SLOW TEST:12.841 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:39:26.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Aug 26 22:39:35.048: INFO: Successfully updated pod "labelsupdatebed3ed03-d2ab-4262-85a7-f0896bbe970a" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:39:37.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2442" for this suite. 
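[Editor's note] The "should update labels on modification" test above works because the downward API projects pod labels into a volume file that the kubelet rewrites when labels change. The file format is roughly one `key="value"` pair per line; a minimal sketch of that rendering (hypothetical helper, not the e2e framework's own code, and without the value-escaping the real kubelet performs):

```python
def render_labels(labels: dict) -> str:
    """Render pod labels in the downward API's file format:
    one key="value" pair per line, sorted by key (no escaping)."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))

# After the test updates the pod's labels, the kubelet rewrites the
# projected file, which is the change the test container observes.
print(render_labels({"name": "labelsupdate", "step": "2"}))
```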
Aug 26 22:39:59.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:39:59.251: INFO: namespace projected-2442 deletion completed in 22.143829161s • [SLOW TEST:32.805 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:39:59.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname 
-i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7513.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7513.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 26 22:40:07.518: INFO: DNS probes using dns-7513/dns-test-7e29bf76-8284-425d-8711-102dc958a6d0 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:40:07.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7513" for this suite. 
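[Editor's note] The wheezy/jessie dig loops above all follow one pattern: retry a lookup until it returns an answer, then record success (the `echo OK > /results/...` marker). A rough Python equivalent of that retry loop, using stdlib resolution instead of dig (illustrative only — the real probe runs inside the test pod):

```python
import socket
import time

def probe_dns(name: str, attempts: int = 5, delay: float = 0.1) -> bool:
    """Retry resolution of `name`, mirroring the test's
    `for i in seq 1 600; do dig ...; sleep 1; done` loop."""
    for _ in range(attempts):
        try:
            socket.getaddrinfo(name, None)
            return True  # the equivalent of writing the OK marker file
        except socket.gaierror:
            time.sleep(delay)
    return False

print(probe_dns("localhost"))
```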
Aug 26 22:40:13.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:40:13.838: INFO: namespace dns-7513 deletion completed in 6.265696596s • [SLOW TEST:14.586 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:40:13.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
[AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:40:18.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2822" for this suite. Aug 26 22:40:24.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:40:24.231: INFO: namespace kubelet-test-2822 deletion completed in 6.177637644s • [SLOW TEST:10.392 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:40:24.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 26 22:40:24.342: INFO: Waiting up to 5m0s for pod "pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9" in namespace "emptydir-9974" to be "success or failure" Aug 26 22:40:24.347: INFO: Pod "pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.906055ms Aug 26 22:40:26.456: INFO: Pod "pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114407919s Aug 26 22:40:28.462: INFO: Pod "pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.119979031s Aug 26 22:40:30.469: INFO: Pod "pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127260615s STEP: Saw pod success Aug 26 22:40:30.469: INFO: Pod "pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9" satisfied condition "success or failure" Aug 26 22:40:30.474: INFO: Trying to get logs from node iruya-worker pod pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9 container test-container: STEP: delete the pod Aug 26 22:40:30.511: INFO: Waiting for pod pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9 to disappear Aug 26 22:40:30.546: INFO: Pod pod-f5932aec-d712-4cbf-aa91-4a2a64be7eb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:40:30.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9974" for this suite. 
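[Editor's note] The emptyDir mode tests (0666 on tmpfs and on the node's default medium) all verify the same thing: a file created with the requested mode reports the expected permission string from inside the test container. A local sketch of that check using the stdlib:

```python
import os
import stat
import tempfile

# Create a file, set mode 0666 as the pod spec requests, and compute the
# permission string the test container would print (e.g. "-rw-rw-rw-").
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)  # chmod is not filtered by the umask
mode = stat.filemode(os.stat(path).st_mode)
print(mode)  # -rw-rw-rw-
os.remove(path)
```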
Aug 26 22:40:36.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:40:36.707: INFO: namespace emptydir-9974 deletion completed in 6.150712343s • [SLOW TEST:12.476 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:40:36.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:41:36.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8474" for this suite. Aug 26 22:41:58.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:41:58.997: INFO: namespace container-probe-8474 deletion completed in 22.17982785s • [SLOW TEST:82.289 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:41:59.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] 
[Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 22:41:59.096: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:42:05.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8806" for this suite. Aug 26 22:42:55.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:42:55.374: INFO: namespace pods-8806 deletion completed in 50.159390553s • [SLOW TEST:56.374 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:42:55.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be 
provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Aug 26 22:42:55.460: INFO: Waiting up to 5m0s for pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0" in namespace "var-expansion-380" to be "success or failure" Aug 26 22:42:55.482: INFO: Pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.12426ms Aug 26 22:42:57.490: INFO: Pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029558752s Aug 26 22:42:59.497: INFO: Pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036475011s Aug 26 22:43:01.503: INFO: Pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0": Phase="Running", Reason="", readiness=true. Elapsed: 6.043227905s Aug 26 22:43:03.509: INFO: Pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.049388902s STEP: Saw pod success Aug 26 22:43:03.510: INFO: Pod "var-expansion-52262023-870a-4fd7-b64b-004940e187d0" satisfied condition "success or failure" Aug 26 22:43:03.514: INFO: Trying to get logs from node iruya-worker pod var-expansion-52262023-870a-4fd7-b64b-004940e187d0 container dapi-container: STEP: delete the pod Aug 26 22:43:03.536: INFO: Waiting for pod var-expansion-52262023-870a-4fd7-b64b-004940e187d0 to disappear Aug 26 22:43:03.565: INFO: Pod var-expansion-52262023-870a-4fd7-b64b-004940e187d0 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:43:03.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-380" for this suite. Aug 26 22:43:09.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:43:10.303: INFO: namespace var-expansion-380 deletion completed in 6.728453248s • [SLOW TEST:14.929 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
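The Variable Expansion spec that just passed verifies Kubernetes' `$(VAR)` substitution in a container's command. The expansion rule it exercises can be sketched in Python (a simplification of the real kubelet implementation; the helper name is ours): references like `$(VAR)` are replaced from the container's env, `$$(VAR)` is the escape form and is emitted literally, and unresolvable references are left untouched.

```python
import re

def expand_command(args, env):
    """Sketch of Kubernetes $(VAR) expansion in a container command.

    $(VAR)  -> value from the container's env, if defined
    $$(VAR) -> escaped; emitted literally as $(VAR)
    unresolvable $(VAR) references are left as-is
    """
    def repl(m):
        if m.group(0).startswith("$$"):
            return m.group(0)[1:]              # drop one '$' of the escape
        return env.get(m.group(1), m.group(0))  # substitute or keep literal
    return [re.sub(r"\$?\$\((\w+)\)", repl, a) for a in args]
```

For example, with `env = {"MY_VAR": "hello"}`, the argument list `["echo", "$(MY_VAR)", "$$(MY_VAR)", "$(MISSING)"]` expands to `["echo", "hello", "$(MY_VAR)", "$(MISSING)"]`.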
STEP: Creating a kubernetes client Aug 26 22:43:10.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-8006 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8006 to expose endpoints map[] Aug 26 22:43:10.597: INFO: successfully validated that service multi-endpoint-test in namespace services-8006 exposes endpoints map[] (94.123821ms elapsed) STEP: Creating pod pod1 in namespace services-8006 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8006 to expose endpoints map[pod1:[100]] Aug 26 22:43:15.002: INFO: successfully validated that service multi-endpoint-test in namespace services-8006 exposes endpoints map[pod1:[100]] (4.395380809s elapsed) STEP: Creating pod pod2 in namespace services-8006 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8006 to expose endpoints map[pod1:[100] pod2:[101]] Aug 26 22:43:19.234: INFO: successfully validated that service multi-endpoint-test in namespace services-8006 exposes endpoints map[pod1:[100] pod2:[101]] (4.22581935s elapsed) STEP: Deleting pod pod1 in namespace services-8006 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8006 to expose endpoints map[pod2:[101]] Aug 26 22:43:19.293: INFO: successfully validated that service multi-endpoint-test in namespace services-8006 exposes endpoints map[pod2:[101]] (51.012443ms elapsed) 
STEP: Deleting pod pod2 in namespace services-8006 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8006 to expose endpoints map[] Aug 26 22:43:19.303: INFO: successfully validated that service multi-endpoint-test in namespace services-8006 exposes endpoints map[] (3.593218ms elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:43:19.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8006" for this suite. Aug 26 22:43:43.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:43:43.671: INFO: namespace services-8006 deletion completed in 24.328917066s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.367 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:43:43.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 26 22:43:43.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b" in namespace "downward-api-9124" to be "success or failure" Aug 26 22:43:43.882: INFO: Pod "downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.345914ms Aug 26 22:43:45.939: INFO: Pod "downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104202624s Aug 26 22:43:47.946: INFO: Pod "downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b": Phase="Running", Reason="", readiness=true. Elapsed: 4.11174017s Aug 26 22:43:49.994: INFO: Pod "downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.159350384s STEP: Saw pod success Aug 26 22:43:49.994: INFO: Pod "downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b" satisfied condition "success or failure" Aug 26 22:43:50.075: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b container client-container: STEP: delete the pod Aug 26 22:43:50.179: INFO: Waiting for pod downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b to disappear Aug 26 22:43:50.207: INFO: Pod downwardapi-volume-f67cd2d2-f769-47fe-829c-6744a165449b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:43:50.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9124" for this suite. Aug 26 22:43:58.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:43:58.547: INFO: namespace downward-api-9124 deletion completed in 8.244738805s • [SLOW TEST:14.873 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] 
SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:43:58.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Aug 26 22:43:58.680: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 26 22:43:58.693: INFO: Waiting for terminating namespaces to be deleted... Aug 26 22:43:58.699: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Aug 26 22:43:58.720: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.720: INFO: Container app ready: true, restart count 0 Aug 26 22:43:58.721: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.721: INFO: Container kube-proxy ready: true, restart count 0 Aug 26 22:43:58.721: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.721: INFO: Container app ready: true, restart count 0 Aug 26 22:43:58.721: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.721: INFO: Container kindnet-cni ready: true, restart count 0 Aug 26 22:43:58.721: INFO: daemon-set-6z8rp from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.721: INFO: Container app ready: true, restart count 0 Aug 26 22:43:58.721: INFO: Logging pods the kubelet thinks is on node 
iruya-worker2 before test Aug 26 22:43:58.742: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container kindnet-cni ready: true, restart count 0 Aug 26 22:43:58.742: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container app ready: true, restart count 0 Aug 26 22:43:58.742: INFO: daemon-set-fzgmk from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container app ready: true, restart count 0 Aug 26 22:43:58.742: INFO: rally-f1ef6468-l5ier0p5-75d94c65c4-754br from c-rally-f1ef6468-ozvkojkk started at 2020-08-26 22:43:37 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container rally-f1ef6468-l5ier0p5 ready: true, restart count 0 Aug 26 22:43:58.742: INFO: rally-f1ef6468-l5ier0p5-6c878b5dc5-jcprx from c-rally-f1ef6468-ozvkojkk started at 2020-08-26 22:43:31 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container rally-f1ef6468-l5ier0p5 ready: true, restart count 0 Aug 26 22:43:58.742: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container kube-proxy ready: true, restart count 0 Aug 26 22:43:58.742: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded) Aug 26 22:43:58.742: INFO: Container app ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
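The NodeSelector predicate this sched-pred spec is about to validate has a simple core: a pod with a `nodeSelector` is schedulable on a node only if every key/value pair in the selector appears, with the same value, in the node's labels. A minimal sketch of that matching rule (not the scheduler's actual code; label keys below are illustrative):

```python
def node_selector_matches(node_labels, node_selector):
    """Sketch of the MatchNodeSelector scheduling predicate: every
    key/value pair the pod requests must be present on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# A node labeled with the random e2e label matches a pod selecting it;
# a pod selecting a different value does not schedule there.
labels = {"kubernetes.io/hostname": "iruya-worker", "color": "blue"}
```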
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9bd3da5b-6b67-4391-b67d-5d40c0bde514 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-9bd3da5b-6b67-4391-b67d-5d40c0bde514 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-9bd3da5b-6b67-4391-b67d-5d40c0bde514 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:44:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7460" for this suite. Aug 26 22:44:27.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:44:27.677: INFO: namespace sched-pred-7460 deletion completed in 18.132646721s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:29.127 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:44:27.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Aug 26 22:44:27.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 26 22:44:29.119: INFO: stderr: "" Aug 26 22:44:29.119: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:44:29.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4382" for this suite. 
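The `kubectl cluster-info` stdout captured in this spec contains ANSI color escape sequences (`\x1b[0;32m` etc.), which is why the raw log looks garbled. When post-processing such output, the escapes can be stripped with a small regex (a generic cleanup sketch, not part of the e2e framework):

```python
import re

# Matches SGR color sequences like \x1b[0;32m and the reset \x1b[0m
ANSI = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s):
    """Remove the color escape sequences kubectl cluster-info emits."""
    return ANSI.sub("", s)
```

Applied to the captured stdout, `\x1b[0;32mKubernetes master\x1b[0m is running at ...` becomes plain `Kubernetes master is running at ...`.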
Aug 26 22:44:35.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:44:35.415: INFO: namespace kubectl-4382 deletion completed in 6.28649205s • [SLOW TEST:7.736 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:44:35.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 22:44:35.551: INFO: 
Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 26 22:44:35.574: INFO: Number of nodes with available pods: 0 Aug 26 22:44:35.575: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Aug 26 22:44:35.663: INFO: Number of nodes with available pods: 0 Aug 26 22:44:35.664: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:36.673: INFO: Number of nodes with available pods: 0 Aug 26 22:44:36.674: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:37.669: INFO: Number of nodes with available pods: 0 Aug 26 22:44:37.669: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:38.671: INFO: Number of nodes with available pods: 0 Aug 26 22:44:38.671: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:40.385: INFO: Number of nodes with available pods: 0 Aug 26 22:44:40.385: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:40.671: INFO: Number of nodes with available pods: 1 Aug 26 22:44:40.672: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 26 22:44:40.822: INFO: Number of nodes with available pods: 1 Aug 26 22:44:40.822: INFO: Number of running nodes: 0, number of available pods: 1 Aug 26 22:44:41.830: INFO: Number of nodes with available pods: 0 Aug 26 22:44:41.830: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 26 22:44:41.928: INFO: Number of nodes with available pods: 0 Aug 26 22:44:41.928: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:43.219: INFO: Number of nodes with available pods: 0 Aug 26 22:44:43.219: INFO: Node iruya-worker is running more than one daemon pod Aug 26 
22:44:43.934: INFO: Number of nodes with available pods: 0 Aug 26 22:44:43.934: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:44.933: INFO: Number of nodes with available pods: 0 Aug 26 22:44:44.933: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:46.067: INFO: Number of nodes with available pods: 0 Aug 26 22:44:46.067: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:46.936: INFO: Number of nodes with available pods: 0 Aug 26 22:44:46.936: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:47.935: INFO: Number of nodes with available pods: 0 Aug 26 22:44:47.935: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:49.156: INFO: Number of nodes with available pods: 0 Aug 26 22:44:49.156: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:49.960: INFO: Number of nodes with available pods: 0 Aug 26 22:44:49.960: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:50.936: INFO: Number of nodes with available pods: 0 Aug 26 22:44:50.936: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:44:51.935: INFO: Number of nodes with available pods: 1 Aug 26 22:44:51.935: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7726, will wait for the garbage collector to delete the pods Aug 26 22:44:52.026: INFO: Deleting DaemonSet.extensions daemon-set took: 19.600135ms Aug 26 22:44:52.328: INFO: Terminating DaemonSet.extensions daemon-set pods took: 302.118141ms Aug 26 22:45:03.335: INFO: Number of nodes with available pods: 0 Aug 26 22:45:03.335: INFO: Number of running nodes: 0, number of 
available pods: 0 Aug 26 22:45:03.356: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7726/daemonsets","resourceVersion":"3034569"},"items":null} Aug 26 22:45:03.386: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7726/pods","resourceVersion":"3034569"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:45:03.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7726" for this suite. Aug 26 22:45:09.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:45:09.602: INFO: namespace daemonsets-7726 deletion completed in 6.148490278s • [SLOW TEST:34.185 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:45:09.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 26 22:45:09.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:09.759: INFO: Number of nodes with available pods: 0 Aug 26 22:45:09.759: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:11.076: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:11.169: INFO: Number of nodes with available pods: 0 Aug 26 22:45:11.169: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:11.771: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:11.776: INFO: Number of nodes with available pods: 0 Aug 26 22:45:11.776: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:12.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:12.773: INFO: Number of nodes with available pods: 0 Aug 26 22:45:12.773: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:13.862: 
INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:13.868: INFO: Number of nodes with available pods: 0 Aug 26 22:45:13.869: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:14.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:14.881: INFO: Number of nodes with available pods: 0 Aug 26 22:45:14.881: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:15.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:15.775: INFO: Number of nodes with available pods: 0 Aug 26 22:45:15.775: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:16.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:16.773: INFO: Number of nodes with available pods: 1 Aug 26 22:45:16.773: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:17.771: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:17.777: INFO: Number of nodes with available pods: 2 Aug 26 22:45:17.777: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 26 22:45:17.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:17.814: INFO: Number of nodes with available pods: 1 Aug 26 22:45:17.814: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:18.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:18.855: INFO: Number of nodes with available pods: 1 Aug 26 22:45:18.855: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:19.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:19.834: INFO: Number of nodes with available pods: 1 Aug 26 22:45:19.834: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:20.880: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:20.885: INFO: Number of nodes with available pods: 1 Aug 26 22:45:20.885: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:21.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:22.406: INFO: Number of nodes with available pods: 1 Aug 26 22:45:22.406: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:23.070: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:23.077: INFO: Number of nodes with available pods: 1 Aug 26 22:45:23.077: INFO: 
Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:23.962: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:23.968: INFO: Number of nodes with available pods: 1 Aug 26 22:45:23.968: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:24.824: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:24.829: INFO: Number of nodes with available pods: 1 Aug 26 22:45:24.829: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:25.960: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:26.323: INFO: Number of nodes with available pods: 1 Aug 26 22:45:26.323: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:26.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:26.854: INFO: Number of nodes with available pods: 1 Aug 26 22:45:26.854: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:27.824: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:27.830: INFO: Number of nodes with available pods: 2 Aug 26 22:45:27.830: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" 
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9815, will wait for the garbage collector to delete the pods Aug 26 22:45:27.899: INFO: Deleting DaemonSet.extensions daemon-set took: 9.001423ms Aug 26 22:45:28.200: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.841533ms Aug 26 22:45:43.934: INFO: Number of nodes with available pods: 0 Aug 26 22:45:43.934: INFO: Number of running nodes: 0, number of available pods: 0 Aug 26 22:45:43.938: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9815/daemonsets","resourceVersion":"3034771"},"items":null} Aug 26 22:45:43.942: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9815/pods","resourceVersion":"3034771"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:45:43.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9815" for this suite. 
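The dominant line in this test — "DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master ...}]" — comes from the e2e framework skipping any node whose taints the DaemonSet's pods do not tolerate before counting available pods. A minimal sketch of that matching logic, using simplified stand-in types rather than the real k8s.io/api ones:

```go
package main

import "fmt"

// Taint and Toleration are simplified stand-ins for the Kubernetes API types.
type Taint struct {
	Key    string
	Effect string // e.g. "NoSchedule"
}

type Toleration struct {
	Key    string
	Effect string // empty means "matches any effect"
}

// tolerates reports whether a pod's tolerations cover every taint on a node.
// Nodes failing this check are skipped when counting available daemon pods,
// which is why the control-plane node never appears in the counts above.
func tolerates(tolerations []Toleration, taints []Taint) bool {
	for _, taint := range taints {
		matched := false
		for _, tol := range tolerations {
			if tol.Key == taint.Key && (tol.Effect == "" || tol.Effect == taint.Effect) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	fmt.Println(tolerates(nil, master)) // false: node is skipped
	fmt.Println(tolerates([]Toleration{{Key: "node-role.kubernetes.io/master"}}, master)) // true
}
```

The real matching (in k8s.io/api and the scheduler) also handles operators, values, and tolerationSeconds; this sketch keeps only the key/effect comparison visible in the log.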
Aug 26 22:45:52.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:45:52.162: INFO: namespace daemonsets-9815 deletion completed in 8.195678847s • [SLOW TEST:42.557 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:45:52.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 26 22:45:52.311: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:52.320: INFO: Number of nodes with available pods: 0 Aug 26 22:45:52.320: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:53.339: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:53.352: INFO: Number of nodes with available pods: 0 Aug 26 22:45:53.353: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:54.563: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:54.569: INFO: Number of nodes with available pods: 0 Aug 26 22:45:54.569: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:55.730: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:56.038: INFO: Number of nodes with available pods: 0 Aug 26 22:45:56.038: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:56.418: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:56.423: INFO: Number of nodes with available pods: 1 Aug 26 22:45:56.423: INFO: Node iruya-worker2 is running more than one daemon pod Aug 26 22:45:57.330: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:57.336: INFO: Number of nodes with available pods: 1 Aug 26 22:45:57.336: INFO: Node 
iruya-worker2 is running more than one daemon pod Aug 26 22:45:58.335: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:58.340: INFO: Number of nodes with available pods: 2 Aug 26 22:45:58.340: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 26 22:45:58.389: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:58.428: INFO: Number of nodes with available pods: 1 Aug 26 22:45:58.428: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:45:59.446: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:45:59.451: INFO: Number of nodes with available pods: 1 Aug 26 22:45:59.451: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:46:00.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:46:00.546: INFO: Number of nodes with available pods: 1 Aug 26 22:46:00.546: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:46:01.439: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 22:46:01.446: INFO: Number of nodes with available pods: 1 Aug 26 22:46:01.446: INFO: Node iruya-worker is running more than one daemon pod Aug 26 22:46:02.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Aug 26 22:46:02.447: INFO: Number of nodes with available pods: 2 Aug 26 22:46:02.447: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3667, will wait for the garbage collector to delete the pods Aug 26 22:46:02.519: INFO: Deleting DaemonSet.extensions daemon-set took: 7.899953ms Aug 26 22:46:02.620: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.77934ms Aug 26 22:46:14.151: INFO: Number of nodes with available pods: 0 Aug 26 22:46:14.151: INFO: Number of running nodes: 0, number of available pods: 0 Aug 26 22:46:14.183: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3667/daemonsets","resourceVersion":"3034955"},"items":null} Aug 26 22:46:14.187: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3667/pods","resourceVersion":"3034955"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:46:14.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3667" for this suite. 
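The "Number of nodes with available pods: N" lines in both DaemonSet tests count the distinct schedulable nodes running at least one ready daemon pod, and the test only proceeds once that count matches the number of running nodes. A rough sketch of that count, with a simplified pod type standing in for the real one:

```go
package main

import "fmt"

// daemonPod is a simplified view of a DaemonSet pod for this sketch.
type daemonPod struct {
	NodeName string
	Ready    bool
}

// nodesWithAvailablePods mirrors the check behind the
// "Number of nodes with available pods: N" log lines: count the
// distinct nodes that run at least one ready daemon pod.
func nodesWithAvailablePods(pods []daemonPod) int {
	seen := map[string]bool{}
	for _, p := range pods {
		if p.Ready {
			seen[p.NodeName] = true
		}
	}
	return len(seen)
}

func main() {
	// After one pod is deleted (or marked Failed), its node drops out of
	// the count until the controller's replacement pod becomes ready.
	pods := []daemonPod{
		{NodeName: "iruya-worker", Ready: true},
		{NodeName: "iruya-worker2", Ready: false}, // replacement still starting
	}
	fmt.Println(nodesWithAvailablePods(pods)) // 1
}
```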
Aug 26 22:46:20.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:46:21.176: INFO: namespace daemonsets-3667 deletion completed in 6.963912549s • [SLOW TEST:29.013 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:46:21.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 26 22:46:21.305: INFO: Waiting up to 5m0s for pod "pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7" in namespace "emptydir-3923" to be "success or failure" Aug 26 22:46:21.308: INFO: Pod "pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.339879ms Aug 26 22:46:23.546: INFO: Pod "pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241179035s Aug 26 22:46:25.949: INFO: Pod "pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.643808195s Aug 26 22:46:27.955: INFO: Pod "pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.649833572s STEP: Saw pod success Aug 26 22:46:27.955: INFO: Pod "pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7" satisfied condition "success or failure" Aug 26 22:46:27.958: INFO: Trying to get logs from node iruya-worker2 pod pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7 container test-container: STEP: delete the pod Aug 26 22:46:28.197: INFO: Waiting for pod pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7 to disappear Aug 26 22:46:28.714: INFO: Pod pod-47700a2f-9cae-4850-b33e-3cd03c4b89a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:46:28.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3923" for this suite. 
Aug 26 22:46:35.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:46:35.548: INFO: namespace emptydir-3923 deletion completed in 6.781970915s • [SLOW TEST:14.367 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:46:35.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-80c44a7b-30a8-4ab1-8a28-62d66c299c04 STEP: Creating a pod to test consume configMaps Aug 26 22:46:36.537: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0" 
in namespace "projected-4274" to be "success or failure" Aug 26 22:46:36.763: INFO: Pod "pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0": Phase="Pending", Reason="", readiness=false. Elapsed: 226.257465ms Aug 26 22:46:38.817: INFO: Pod "pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279644922s Aug 26 22:46:40.823: INFO: Pod "pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286304135s Aug 26 22:46:42.830: INFO: Pod "pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.292956103s STEP: Saw pod success Aug 26 22:46:42.831: INFO: Pod "pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0" satisfied condition "success or failure" Aug 26 22:46:42.846: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0 container projected-configmap-volume-test: STEP: delete the pod Aug 26 22:46:42.876: INFO: Waiting for pod pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0 to disappear Aug 26 22:46:43.273: INFO: Pod pod-projected-configmaps-953cdd82-1c67-4fe1-aa13-3a697dccf0d0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:46:43.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4274" for this suite. 
Aug 26 22:46:49.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:46:49.707: INFO: namespace projected-4274 deletion completed in 6.410220918s • [SLOW TEST:14.158 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:46:49.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 26 22:46:50.009: INFO: Waiting up to 5m0s for pod "downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4" in namespace "downward-api-778" to be "success or failure" Aug 26 22:46:50.332: INFO: Pod 
"downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 323.231456ms Aug 26 22:46:52.339: INFO: Pod "downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330575261s Aug 26 22:46:54.346: INFO: Pod "downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337648554s Aug 26 22:46:56.668: INFO: Pod "downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.659063429s STEP: Saw pod success Aug 26 22:46:56.668: INFO: Pod "downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4" satisfied condition "success or failure" Aug 26 22:46:56.673: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4 container dapi-container: STEP: delete the pod Aug 26 22:46:56.854: INFO: Waiting for pod downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4 to disappear Aug 26 22:46:56.903: INFO: Pod downward-api-f27b1b2c-605a-406b-94b0-4acf638b35c4 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:46:56.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-778" for this suite. 
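The Downward API test above asserts that the pod's name, namespace, and IP show up inside the container as environment variables populated via fieldRef. A sketch of those mappings (the env var names here are illustrative; the fieldRef paths are the standard downward API field paths):

```go
package main

import "fmt"

// envVarSource pairs an env var name with the downward API fieldRef path
// it is populated from.
type envVarSource struct {
	Name      string
	FieldPath string
}

// downwardAPIEnv lists the three mappings this conformance test checks:
// pod name, namespace, and IP exposed as env vars. The variable names
// are hypothetical; the field paths are the real downward API paths.
func downwardAPIEnv() []envVarSource {
	return []envVarSource{
		{Name: "POD_NAME", FieldPath: "metadata.name"},
		{Name: "POD_NAMESPACE", FieldPath: "metadata.namespace"},
		{Name: "POD_IP", FieldPath: "status.podIP"},
	}
}

func main() {
	for _, e := range downwardAPIEnv() {
		fmt.Printf("%s <- fieldRef %s\n", e.Name, e.FieldPath)
	}
}
```

In a real pod spec each entry becomes an `env` item with `valueFrom.fieldRef.fieldPath` set to the corresponding path; `status.podIP` is only resolvable once the pod is running, which is why the test waits for the pod to complete before reading its logs.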
Aug 26 22:47:03.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:47:03.199: INFO: namespace downward-api-778 deletion completed in 6.283892452s • [SLOW TEST:13.491 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:47:03.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:47:08.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5444" for this suite. Aug 26 22:48:00.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:48:00.554: INFO: namespace kubelet-test-5444 deletion completed in 52.322576142s • [SLOW TEST:57.353 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:48:00.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 22:48:00.759: INFO: Creating deployment "test-recreate-deployment" Aug 26 22:48:00.767: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 26 22:48:00.899: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 26 22:48:02.914: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 26 22:48:02.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078880, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078880, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078881, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078880, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 22:48:04.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078880, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078880, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078881, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078880, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 22:48:06.931: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 26 22:48:06.947: INFO: Updating deployment test-recreate-deployment Aug 26 22:48:06.948: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 26 22:48:07.185: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4627,SelfLink:/apis/apps/v1/namespaces/deployment-4627/deployments/test-recreate-deployment,UID:237f857a-fc5b-4522-9414-e85a6b49e97f,ResourceVersion:3035511,Generation:2,CreationTimestamp:2020-08-26 22:48:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-26 22:48:07 +0000 UTC 2020-08-26 22:48:07 +0000 UTC MinimumReplicasUnavailable Deployment does not have 
minimum availability.} {Progressing True 2020-08-26 22:48:07 +0000 UTC 2020-08-26 22:48:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 26 22:48:07.193: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4627,SelfLink:/apis/apps/v1/namespaces/deployment-4627/replicasets/test-recreate-deployment-5c8c9cc69d,UID:630f7bd5-984e-4a86-b05c-94d51afb70f2,ResourceVersion:3035508,Generation:1,CreationTimestamp:2020-08-26 22:48:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 237f857a-fc5b-4522-9414-e85a6b49e97f 0x40010ac6d7 0x40010ac6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 26 22:48:07.194: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 26 22:48:07.194: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4627,SelfLink:/apis/apps/v1/namespaces/deployment-4627/replicasets/test-recreate-deployment-6df85df6b9,UID:ac8b7d18-6b0b-4e59-a2ca-2ea05b843a25,ResourceVersion:3035500,Generation:2,CreationTimestamp:2020-08-26 22:48:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 
237f857a-fc5b-4522-9414-e85a6b49e97f 0x40010ac7a7 0x40010ac7a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 26 22:48:07.201: INFO: Pod "test-recreate-deployment-5c8c9cc69d-lsb2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-lsb2j,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4627,SelfLink:/api/v1/namespaces/deployment-4627/pods/test-recreate-deployment-5c8c9cc69d-lsb2j,UID:633fb75f-f65c-4f17-b57c-0426c6861d15,ResourceVersion:3035512,Generation:0,CreationTimestamp:2020-08-26 22:48:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 630f7bd5-984e-4a86-b05c-94d51afb70f2 0x4001698247 0x4001698248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9t2l2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9t2l2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9t2l2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40016982c0} {node.kubernetes.io/unreachable Exists NoExecute 0x40016982e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:48:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:48:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:48:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 22:48:07 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-26 22:48:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} 
false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:48:07.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4627" for this suite. Aug 26 22:48:13.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:48:13.672: INFO: namespace deployment-4627 deletion completed in 6.462974109s • [SLOW TEST:13.115 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:48:13.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9689.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9689.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9689.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9689.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9689.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 87.37.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.37.87_udp@PTR;check="$$(dig +tcp +noall +answer +search 87.37.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.37.87_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9689.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9689.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9689.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9689.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9689.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9689.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 87.37.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.37.87_udp@PTR;check="$$(dig +tcp +noall +answer +search 87.37.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.37.87_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 26 22:48:21.966: INFO: Unable to read wheezy_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:21.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:21.974: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:21.978: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:22.006: INFO: Unable to read jessie_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:22.010: INFO: Unable to read jessie_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods 
dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:22.014: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:22.017: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:22.038: INFO: Lookups using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d failed for: [wheezy_udp@dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_udp@dns-test-service.dns-9689.svc.cluster.local jessie_tcp@dns-test-service.dns-9689.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local] Aug 26 22:48:27.045: INFO: Unable to read wheezy_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.049: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.053: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods 
dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.057: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.082: INFO: Unable to read jessie_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.085: INFO: Unable to read jessie_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.089: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.093: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:27.114: INFO: Lookups using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d failed for: [wheezy_udp@dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_udp@dns-test-service.dns-9689.svc.cluster.local jessie_tcp@dns-test-service.dns-9689.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local] Aug 26 22:48:32.044: INFO: Unable to read wheezy_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.048: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.051: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.054: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.078: INFO: Unable to read jessie_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.082: INFO: Unable to read jessie_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.085: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.088: 
INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:32.106: INFO: Lookups using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d failed for: [wheezy_udp@dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_udp@dns-test-service.dns-9689.svc.cluster.local jessie_tcp@dns-test-service.dns-9689.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local] Aug 26 22:48:37.046: INFO: Unable to read wheezy_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.050: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.055: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.059: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.087: INFO: Unable to read 
jessie_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.091: INFO: Unable to read jessie_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.095: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.099: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:37.120: INFO: Lookups using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d failed for: [wheezy_udp@dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_udp@dns-test-service.dns-9689.svc.cluster.local jessie_tcp@dns-test-service.dns-9689.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local] Aug 26 22:48:42.046: INFO: Unable to read wheezy_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.051: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.055: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.059: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.089: INFO: Unable to read jessie_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.093: INFO: Unable to read jessie_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.096: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:42.122: INFO: Lookups using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d failed for: 
[wheezy_udp@dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_udp@dns-test-service.dns-9689.svc.cluster.local jessie_tcp@dns-test-service.dns-9689.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local] Aug 26 22:48:47.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.088: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.093: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.097: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.123: INFO: Unable to read jessie_udp@dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-9689.svc.cluster.local from pod 
dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.131: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.134: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local from pod dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d: the server could not find the requested resource (get pods dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d) Aug 26 22:48:47.154: INFO: Lookups using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d failed for: [wheezy_udp@dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@dns-test-service.dns-9689.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_udp@dns-test-service.dns-9689.svc.cluster.local jessie_tcp@dns-test-service.dns-9689.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9689.svc.cluster.local] Aug 26 22:48:52.112: INFO: DNS probes using dns-9689/dns-test-105805d6-e32f-429c-b56c-bbd3ba2f116d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:48:54.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9689" for this suite. 
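The probe loops shown earlier derive the pod A-record and service PTR names with small awk pipelines. Below is a minimal standalone sketch of those two transforms, using the `dns-9689` namespace and the service IP `10.104.37.87` from this log (the pod IP is an illustrative value, and the doubled `$$` in the logged commands appears to be escaping added by the e2e command templating, so plain `$` is used here):

```shell
# Derive the DNS A-record name for a pod: dots in the pod IP become dashes,
# then the namespace and pod.cluster.local suffix are appended.
pod_ip="10.244.1.5"          # illustrative pod IP, not from the log
namespace="dns-9689"
pod_a_record="$(printf '%s' "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4}')"
pod_a_record="${pod_a_record}.${namespace}.pod.cluster.local"
echo "$pod_a_record"         # -> 10-244-1-5.dns-9689.pod.cluster.local

# Derive the reverse (PTR) lookup name for a service IP: octets reversed,
# suffixed with in-addr.arpa. (matches the 87.37.104.10.in-addr.arpa. query above).
svc_ip="10.104.37.87"        # service IP taken from the logged commands
ptr_name="$(printf '%s' "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')"
echo "$ptr_name"             # -> 87.37.104.10.in-addr.arpa.
```

The actual probes wrap these names in `dig +notcp`/`+tcp +noall +answer +search <name>` calls and write an `OK` marker file per successful lookup, which the test then reads back from the pod.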
Aug 26 22:49:02.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:49:03.107: INFO: namespace dns-9689 deletion completed in 8.253674537s • [SLOW TEST:49.431 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:49:03.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 26 
22:49:03.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282" in namespace "projected-2394" to be "success or failure" Aug 26 22:49:03.265: INFO: Pod "downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282": Phase="Pending", Reason="", readiness=false. Elapsed: 9.379378ms Aug 26 22:49:05.272: INFO: Pod "downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016176227s Aug 26 22:49:07.280: INFO: Pod "downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024544456s Aug 26 22:49:09.288: INFO: Pod "downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032034733s STEP: Saw pod success Aug 26 22:49:09.288: INFO: Pod "downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282" satisfied condition "success or failure" Aug 26 22:49:09.301: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282 container client-container: STEP: delete the pod Aug 26 22:49:09.335: INFO: Waiting for pod downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282 to disappear Aug 26 22:49:09.350: INFO: Pod downwardapi-volume-024e6d61-e6dc-4ee4-8f8e-6dbc3ba4d282 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:49:09.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2394" for this suite. 
Aug 26 22:49:15.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:49:15.677: INFO: namespace projected-2394 deletion completed in 6.316215398s • [SLOW TEST:12.567 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:49:15.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-1759b2c2-83a6-4aea-a3ad-a0d3b6a6d4e4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-1759b2c2-83a6-4aea-a3ad-a0d3b6a6d4e4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected 
configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:49:22.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9623" for this suite. Aug 26 22:49:44.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:49:44.314: INFO: namespace projected-9623 deletion completed in 22.155416062s • [SLOW TEST:28.637 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:49:44.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-upd-67969c9b-5bab-4a5d-a01f-2dc02edb845f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-67969c9b-5bab-4a5d-a01f-2dc02edb845f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:49:50.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8210" for this suite. Aug 26 22:50:14.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:50:14.679: INFO: namespace configmap-8210 deletion completed in 24.158735625s • [SLOW TEST:30.360 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:50:14.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] 
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b9e84597-8b3c-4bde-9e39-1771f159922c STEP: Creating a pod to test consume secrets Aug 26 22:50:14.832: INFO: Waiting up to 5m0s for pod "pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01" in namespace "secrets-9373" to be "success or failure" Aug 26 22:50:14.844: INFO: Pod "pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01": Phase="Pending", Reason="", readiness=false. Elapsed: 11.124472ms Aug 26 22:50:16.850: INFO: Pod "pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018031401s Aug 26 22:50:18.856: INFO: Pod "pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023510242s Aug 26 22:50:20.862: INFO: Pod "pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.029776866s STEP: Saw pod success Aug 26 22:50:20.862: INFO: Pod "pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01" satisfied condition "success or failure" Aug 26 22:50:20.890: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01 container secret-volume-test: STEP: delete the pod Aug 26 22:50:20.930: INFO: Waiting for pod pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01 to disappear Aug 26 22:50:20.950: INFO: Pod pod-secrets-a2ddee98-26d7-4adf-bc7a-13e47ab8cd01 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:50:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9373" for this suite. Aug 26 22:50:26.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:50:27.109: INFO: namespace secrets-9373 deletion completed in 6.150687422s STEP: Destroying namespace "secret-namespace-6070" for this suite. 
Aug 26 22:50:33.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:50:33.270: INFO: namespace secret-namespace-6070 deletion completed in 6.161072679s • [SLOW TEST:18.589 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:50:33.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5ebf4977-e89e-47c3-9c21-022795b449d9 STEP: Creating a pod to test consume secrets Aug 26 22:50:33.409: INFO: Waiting up to 5m0s for pod 
"pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44" in namespace "secrets-5065" to be "success or failure" Aug 26 22:50:33.418: INFO: Pod "pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118494ms Aug 26 22:50:35.449: INFO: Pod "pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039202036s Aug 26 22:50:37.456: INFO: Pod "pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046344972s STEP: Saw pod success Aug 26 22:50:37.456: INFO: Pod "pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44" satisfied condition "success or failure" Aug 26 22:50:37.461: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44 container secret-volume-test: STEP: delete the pod Aug 26 22:50:37.485: INFO: Waiting for pod pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44 to disappear Aug 26 22:50:37.489: INFO: Pod pod-secrets-6e8c40fb-d997-45e5-8d05-a87b604dec44 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:50:37.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5065" for this suite. 
Aug 26 22:50:43.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:50:43.665: INFO: namespace secrets-5065 deletion completed in 6.166260623s • [SLOW TEST:10.392 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:50:43.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the 
container should be terminated STEP: the termination message should be set Aug 26 22:50:47.803: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:50:47.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2939" for this suite. Aug 26 22:50:53.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:50:54.005: INFO: namespace container-runtime-2939 deletion completed in 6.154727645s • [SLOW TEST:10.337 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:50:54.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 26 22:50:54.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9692' Aug 26 22:51:01.417: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 26 22:51:01.417: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Aug 26 22:51:03.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9692' Aug 26 22:51:04.760: INFO: stderr: "" Aug 26 22:51:04.760: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:51:04.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9692" for this suite. 
Aug 26 22:53:06.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:53:06.930: INFO: namespace kubectl-9692 deletion completed in 2m2.155018389s • [SLOW TEST:132.924 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:53:06.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 22:53:07.034: INFO: Creating ReplicaSet my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd Aug 26 22:53:07.055: INFO: Pod name 
my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd: Found 0 pods out of 1 Aug 26 22:53:12.063: INFO: Pod name my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd: Found 1 pods out of 1 Aug 26 22:53:12.064: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd" is running Aug 26 22:53:12.069: INFO: Pod "my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd-rx5kl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 22:53:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 22:53:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 22:53:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 22:53:07 +0000 UTC Reason: Message:}]) Aug 26 22:53:12.070: INFO: Trying to dial the pod Aug 26 22:53:17.094: INFO: Controller my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd: Got expected result from replica 1 [my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd-rx5kl]: "my-hostname-basic-5d94b570-be91-4135-85b1-dcb1479c65cd-rx5kl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:53:17.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6540" for this suite. 
Aug 26 22:53:23.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:53:23.265: INFO: namespace replicaset-6540 deletion completed in 6.160341029s • [SLOW TEST:16.329 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:53:23.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 26 22:53:23.364: INFO: Waiting up to 5m0s for pod "pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d" in namespace "emptydir-1268" to be "success or failure" Aug 26 22:53:23.380: INFO: Pod "pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.431399ms Aug 26 22:53:25.385: INFO: Pod "pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020784382s Aug 26 22:53:27.391: INFO: Pod "pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026794761s Aug 26 22:53:29.439: INFO: Pod "pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074873241s STEP: Saw pod success Aug 26 22:53:29.439: INFO: Pod "pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d" satisfied condition "success or failure" Aug 26 22:53:29.445: INFO: Trying to get logs from node iruya-worker2 pod pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d container test-container: STEP: delete the pod Aug 26 22:53:29.494: INFO: Waiting for pod pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d to disappear Aug 26 22:53:29.576: INFO: Pod pod-417e4e8b-3fb8-49b8-a1e2-e0a217789b3d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 22:53:29.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1268" for this suite. 
Aug 26 22:53:35.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 22:53:35.745: INFO: namespace emptydir-1268 deletion completed in 6.163041668s • [SLOW TEST:12.477 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 22:53:35.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-174 [It] Scaling should happen in predictable order and 
halt if any stateful pod is unhealthy [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-174
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-174
Aug 26 22:53:35.840: INFO: Found 0 stateful pods, waiting for 1
Aug 26 22:53:45.850: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 26 22:53:45.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 22:53:47.456: INFO: stderr: "I0826 22:53:47.275846 227 log.go:172] (0x4000868630) (0x4000926820) Create stream\nI0826 22:53:47.282991 227 log.go:172] (0x4000868630) (0x4000926820) Stream added, broadcasting: 1\nI0826 22:53:47.296835 227 log.go:172] (0x4000868630) Reply frame received for 1\nI0826 22:53:47.297555 227 log.go:172] (0x4000868630) (0x400020da40) Create stream\nI0826 22:53:47.297632 227 log.go:172] (0x4000868630) (0x400020da40) Stream added, broadcasting: 3\nI0826 22:53:47.299120 227 log.go:172] (0x4000868630) Reply frame received for 3\nI0826 22:53:47.299376 227 log.go:172] (0x4000868630) (0x4000926000) Create stream\nI0826 22:53:47.299434 227 log.go:172] (0x4000868630) (0x4000926000) Stream added, broadcasting: 5\nI0826 22:53:47.300985 227 log.go:172] (0x4000868630) Reply frame received for 5\nI0826 22:53:47.363521 227 log.go:172] (0x4000868630) Data frame received for 5\nI0826 22:53:47.363788 227 log.go:172] (0x4000926000) (5) Data frame handling\nI0826 22:53:47.364321 227 log.go:172] (0x4000926000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 22:53:47.433028 227 log.go:172] (0x4000868630) Data frame received for 3\nI0826 22:53:47.433222 227 log.go:172] (0x400020da40) (3) Data frame handling\nI0826 22:53:47.433409 227 log.go:172] (0x4000868630) Data frame received for 5\nI0826 22:53:47.433620 227 log.go:172] (0x4000926000) (5) Data frame handling\nI0826 22:53:47.433722 227 log.go:172] (0x400020da40) (3) Data frame sent\nI0826 22:53:47.433842 227 log.go:172] (0x4000868630) Data frame received for 3\nI0826 22:53:47.433937 227 log.go:172] (0x400020da40) (3) Data frame handling\nI0826 22:53:47.435176 227 log.go:172] (0x4000868630) Data frame received for 1\nI0826 22:53:47.435336 227 log.go:172] (0x4000926820) (1) Data frame handling\nI0826 22:53:47.435449 227 log.go:172] (0x4000926820) (1) Data frame sent\nI0826 22:53:47.436512 227 log.go:172] (0x4000868630) (0x4000926820) Stream removed, broadcasting: 1\nI0826 22:53:47.439434 227 log.go:172] (0x4000868630) Go away received\nI0826 22:53:47.443499 227 log.go:172] (0x4000868630) (0x4000926820) Stream removed, broadcasting: 1\nI0826 22:53:47.443983 227 log.go:172] (0x4000868630) (0x400020da40) Stream removed, broadcasting: 3\nI0826 22:53:47.444389 227 log.go:172] (0x4000868630) (0x4000926000) Stream removed, broadcasting: 5\n"
Aug 26 22:53:47.457: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 22:53:47.457: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Aug 26 22:53:47.463: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 22:53:57.471: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:53:57.472: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 22:53:57.525: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99990971s
Aug 26 22:53:58.533: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.959776681s
Aug 26
22:53:59.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.952133542s
Aug 26 22:54:00.691: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.834745446s
Aug 26 22:54:01.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.794065378s
Aug 26 22:54:02.708: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.785912665s
Aug 26 22:54:03.716: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.777262197s
Aug 26 22:54:04.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.769367684s
Aug 26 22:54:05.731: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.76140855s
Aug 26 22:54:06.738: INFO: Verifying statefulset ss doesn't scale past 1 for another 754.651395ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-174
Aug 26 22:54:07.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:54:09.273: INFO: stderr: "I0826 22:54:09.144401 250 log.go:172] (0x400076a6e0) (0x40009c8820) Create stream\nI0826 22:54:09.147884 250 log.go:172] (0x400076a6e0) (0x40009c8820) Stream added, broadcasting: 1\nI0826 22:54:09.164882 250 log.go:172] (0x400076a6e0) Reply frame received for 1\nI0826 22:54:09.165523 250 log.go:172] (0x400076a6e0) (0x40009b8000) Create stream\nI0826 22:54:09.165602 250 log.go:172] (0x400076a6e0) (0x40009b8000) Stream added, broadcasting: 3\nI0826 22:54:09.167299 250 log.go:172] (0x400076a6e0) Reply frame received for 3\nI0826 22:54:09.167843 250 log.go:172] (0x400076a6e0) (0x40009c8000) Create stream\nI0826 22:54:09.167971 250 log.go:172] (0x400076a6e0) (0x40009c8000) Stream added, broadcasting: 5\nI0826 22:54:09.169378 250 log.go:172] (0x400076a6e0) Reply frame received for 5\nI0826 22:54:09.256392 250 log.go:172] (0x400076a6e0) Data frame received for 3\nI0826 22:54:09.256824 250 log.go:172] (0x400076a6e0) Data frame received for 5\nI0826 22:54:09.257151 250 log.go:172] (0x40009c8000) (5) Data frame handling\nI0826 22:54:09.257373 250 log.go:172] (0x40009b8000) (3) Data frame handling\nI0826 22:54:09.257615 250 log.go:172] (0x400076a6e0) Data frame received for 1\nI0826 22:54:09.257689 250 log.go:172] (0x40009c8820) (1) Data frame handling\nI0826 22:54:09.258249 250 log.go:172] (0x40009c8000) (5) Data frame sent\nI0826 22:54:09.258897 250 log.go:172] (0x40009b8000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0826 22:54:09.259052 250 log.go:172] (0x400076a6e0) Data frame received for 3\nI0826 22:54:09.259102 250 log.go:172] (0x40009b8000) (3) Data frame handling\nI0826 22:54:09.259449 250 log.go:172] (0x400076a6e0) Data frame received for 5\nI0826 22:54:09.259513 250 log.go:172] (0x40009c8000) (5) Data frame handling\nI0826 22:54:09.259583 250 log.go:172] (0x40009c8820) (1) Data frame sent\nI0826 22:54:09.260467 250 log.go:172] (0x400076a6e0) (0x40009c8820) Stream removed, broadcasting: 1\nI0826 22:54:09.261036 250 log.go:172] (0x400076a6e0) Go away received\nI0826 22:54:09.263128 250 log.go:172] (0x400076a6e0) (0x40009c8820) Stream removed, broadcasting: 1\nI0826 22:54:09.263299 250 log.go:172] (0x400076a6e0) (0x40009b8000) Stream removed, broadcasting: 3\nI0826 22:54:09.263550 250 log.go:172] (0x400076a6e0) (0x40009c8000) Stream removed, broadcasting: 5\n"
Aug 26 22:54:09.274: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 22:54:09.274: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Aug 26 22:54:09.280: INFO: Found 1 stateful pods, waiting for 3
Aug 26 22:54:19.291: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 22:54:19.291: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug
26 22:54:19.291: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 26 22:54:19.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 22:54:20.907: INFO: stderr: "I0826 22:54:20.788171 273 log.go:172] (0x40008be420) (0x40004006e0) Create stream\nI0826 22:54:20.791171 273 log.go:172] (0x40008be420) (0x40004006e0) Stream added, broadcasting: 1\nI0826 22:54:20.802038 273 log.go:172] (0x40008be420) Reply frame received for 1\nI0826 22:54:20.802582 273 log.go:172] (0x40008be420) (0x4000400780) Create stream\nI0826 22:54:20.802650 273 log.go:172] (0x40008be420) (0x4000400780) Stream added, broadcasting: 3\nI0826 22:54:20.804257 273 log.go:172] (0x40008be420) Reply frame received for 3\nI0826 22:54:20.804503 273 log.go:172] (0x40008be420) (0x40003ba140) Create stream\nI0826 22:54:20.804576 273 log.go:172] (0x40008be420) (0x40003ba140) Stream added, broadcasting: 5\nI0826 22:54:20.806122 273 log.go:172] (0x40008be420) Reply frame received for 5\nI0826 22:54:20.883143 273 log.go:172] (0x40008be420) Data frame received for 5\nI0826 22:54:20.883521 273 log.go:172] (0x40008be420) Data frame received for 3\nI0826 22:54:20.883915 273 log.go:172] (0x40008be420) Data frame received for 1\nI0826 22:54:20.884037 273 log.go:172] (0x4000400780) (3) Data frame handling\nI0826 22:54:20.884266 273 log.go:172] (0x40004006e0) (1) Data frame handling\nI0826 22:54:20.884668 273 log.go:172] (0x40003ba140) (5) Data frame handling\nI0826 22:54:20.886800 273 log.go:172] (0x40004006e0) (1) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 22:54:20.887217 273 log.go:172] (0x40003ba140) (5) Data frame sent\nI0826 22:54:20.887626 273 log.go:172] (0x4000400780) (3) Data frame sent\nI0826 22:54:20.887873 273 log.go:172] (0x40008be420) Data frame received for 3\nI0826 22:54:20.888055 273 log.go:172] (0x40008be420) Data frame received for 5\nI0826 22:54:20.889498 273 log.go:172] (0x40008be420) (0x40004006e0) Stream removed, broadcasting: 1\nI0826 22:54:20.890825 273 log.go:172] (0x40003ba140) (5) Data frame handling\nI0826 22:54:20.890966 273 log.go:172] (0x4000400780) (3) Data frame handling\nI0826 22:54:20.891905 273 log.go:172] (0x40008be420) Go away received\nI0826 22:54:20.895267 273 log.go:172] (0x40008be420) (0x40004006e0) Stream removed, broadcasting: 1\nI0826 22:54:20.895534 273 log.go:172] (0x40008be420) (0x4000400780) Stream removed, broadcasting: 3\nI0826 22:54:20.895725 273 log.go:172] (0x40008be420) (0x40003ba140) Stream removed, broadcasting: 5\n"
Aug 26 22:54:20.908: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 22:54:20.908: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Aug 26 22:54:20.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 22:54:23.009: INFO: stderr: "I0826 22:54:22.605972 295 log.go:172] (0x4000aba580) (0x400062a6e0) Create stream\nI0826 22:54:22.611446 295 log.go:172] (0x4000aba580) (0x400062a6e0) Stream added, broadcasting: 1\nI0826 22:54:22.630647 295 log.go:172] (0x4000aba580) Reply frame received for 1\nI0826 22:54:22.631657 295 log.go:172] (0x4000aba580) (0x400063a280) Create stream\nI0826 22:54:22.631774 295 log.go:172] (0x4000aba580) (0x400063a280) Stream added, broadcasting: 3\nI0826 22:54:22.633862 295 log.go:172] (0x4000aba580) Reply frame received for 3\nI0826 22:54:22.634152 295 log.go:172] (0x4000aba580) (0x400063a320) Create stream\nI0826 22:54:22.634214 295 log.go:172] (0x4000aba580) (0x400063a320) Stream added, broadcasting: 5\nI0826 22:54:22.635495 295 log.go:172] (0x4000aba580) Reply frame received for 5\nI0826 22:54:22.692248 295 log.go:172] (0x4000aba580) Data frame received for 5\nI0826 22:54:22.692520 295 log.go:172] (0x400063a320) (5) Data frame handling\nI0826 22:54:22.693096 295 log.go:172] (0x400063a320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 22:54:22.982984 295 log.go:172] (0x4000aba580) Data frame received for 3\nI0826 22:54:22.983249 295 log.go:172] (0x400063a280) (3) Data frame handling\nI0826 22:54:22.983385 295 log.go:172] (0x4000aba580) Data frame received for 5\nI0826 22:54:22.983568 295 log.go:172] (0x400063a320) (5) Data frame handling\nI0826 22:54:22.983781 295 log.go:172] (0x400063a280) (3) Data frame sent\nI0826 22:54:22.984068 295 log.go:172] (0x4000aba580) Data frame received for 3\nI0826 22:54:22.984225 295 log.go:172] (0x400063a280) (3) Data frame handling\nI0826 22:54:22.984623 295 log.go:172] (0x4000aba580) Data frame received for 1\nI0826 22:54:22.984917 295 log.go:172] (0x400062a6e0) (1) Data frame handling\nI0826 22:54:22.985069 295 log.go:172] (0x400062a6e0) (1) Data frame sent\nI0826 22:54:22.987022 295 log.go:172] (0x4000aba580) (0x400062a6e0) Stream removed, broadcasting: 1\nI0826 22:54:22.990317 295 log.go:172] (0x4000aba580) Go away received\nI0826 22:54:22.993687 295 log.go:172] (0x4000aba580) (0x400062a6e0) Stream removed, broadcasting: 1\nI0826 22:54:22.994022 295 log.go:172] (0x4000aba580) (0x400063a280) Stream removed, broadcasting: 3\nI0826 22:54:22.994357 295 log.go:172] (0x4000aba580) (0x400063a320) Stream removed, broadcasting: 5\n"
Aug 26 22:54:23.009: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 22:54:23.009: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Aug 26 22:54:23.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 --
/bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 22:54:24.537: INFO: stderr: "I0826 22:54:24.395348 320 log.go:172] (0x4000142fd0) (0x4000622be0) Create stream\nI0826 22:54:24.399159 320 log.go:172] (0x4000142fd0) (0x4000622be0) Stream added, broadcasting: 1\nI0826 22:54:24.414872 320 log.go:172] (0x4000142fd0) Reply frame received for 1\nI0826 22:54:24.416022 320 log.go:172] (0x4000142fd0) (0x40006c8000) Create stream\nI0826 22:54:24.416145 320 log.go:172] (0x4000142fd0) (0x40006c8000) Stream added, broadcasting: 3\nI0826 22:54:24.418158 320 log.go:172] (0x4000142fd0) Reply frame received for 3\nI0826 22:54:24.418504 320 log.go:172] (0x4000142fd0) (0x4000622c80) Create stream\nI0826 22:54:24.418582 320 log.go:172] (0x4000142fd0) (0x4000622c80) Stream added, broadcasting: 5\nI0826 22:54:24.419838 320 log.go:172] (0x4000142fd0) Reply frame received for 5\nI0826 22:54:24.486159 320 log.go:172] (0x4000142fd0) Data frame received for 5\nI0826 22:54:24.486523 320 log.go:172] (0x4000622c80) (5) Data frame handling\nI0826 22:54:24.487355 320 log.go:172] (0x4000622c80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 22:54:24.513703 320 log.go:172] (0x4000142fd0) Data frame received for 5\nI0826 22:54:24.513888 320 log.go:172] (0x4000622c80) (5) Data frame handling\nI0826 22:54:24.514083 320 log.go:172] (0x4000142fd0) Data frame received for 3\nI0826 22:54:24.514251 320 log.go:172] (0x40006c8000) (3) Data frame handling\nI0826 22:54:24.514370 320 log.go:172] (0x40006c8000) (3) Data frame sent\nI0826 22:54:24.514462 320 log.go:172] (0x4000142fd0) Data frame received for 3\nI0826 22:54:24.514560 320 log.go:172] (0x40006c8000) (3) Data frame handling\nI0826 22:54:24.516288 320 log.go:172] (0x4000142fd0) Data frame received for 1\nI0826 22:54:24.516472 320 log.go:172] (0x4000622be0) (1) Data frame handling\nI0826 22:54:24.516624 320 log.go:172] (0x4000622be0) (1) Data frame sent\nI0826 22:54:24.517611 320 log.go:172] (0x4000142fd0) (0x4000622be0) Stream removed, broadcasting: 1\nI0826 22:54:24.522800 320 log.go:172] (0x4000142fd0) Go away received\nI0826 22:54:24.525263 320 log.go:172] (0x4000142fd0) (0x4000622be0) Stream removed, broadcasting: 1\nI0826 22:54:24.525884 320 log.go:172] (0x4000142fd0) (0x40006c8000) Stream removed, broadcasting: 3\nI0826 22:54:24.526246 320 log.go:172] (0x4000142fd0) (0x4000622c80) Stream removed, broadcasting: 5\n"
Aug 26 22:54:24.538: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 22:54:24.538: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Aug 26 22:54:24.538: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 22:54:24.543: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 26 22:54:34.557: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:54:34.558: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:54:34.558: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:54:34.824: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997221s
Aug 26 22:54:35.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.741980474s
Aug 26 22:54:36.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.733594873s
Aug 26 22:54:37.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.715264612s
Aug 26 22:54:38.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.706280417s
Aug 26 22:54:39.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.695373473s
Aug 26 22:54:40.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.687045672s
Aug 26 22:54:41.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.678642155s
Aug 26
22:54:42.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.670749341s
Aug 26 22:54:43.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 659.658143ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-174
Aug 26 22:54:44.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:54:46.430: INFO: stderr: "I0826 22:54:46.315960 342 log.go:172] (0x400083a210) (0x4000a3e640) Create stream\nI0826 22:54:46.319519 342 log.go:172] (0x400083a210) (0x4000a3e640) Stream added, broadcasting: 1\nI0826 22:54:46.332175 342 log.go:172] (0x400083a210) Reply frame received for 1\nI0826 22:54:46.332928 342 log.go:172] (0x400083a210) (0x4000a3e6e0) Create stream\nI0826 22:54:46.332996 342 log.go:172] (0x400083a210) (0x4000a3e6e0) Stream added, broadcasting: 3\nI0826 22:54:46.334918 342 log.go:172] (0x400083a210) Reply frame received for 3\nI0826 22:54:46.335192 342 log.go:172] (0x400083a210) (0x4000a78000) Create stream\nI0826 22:54:46.335254 342 log.go:172] (0x400083a210) (0x4000a78000) Stream added, broadcasting: 5\nI0826 22:54:46.337126 342 log.go:172] (0x400083a210) Reply frame received for 5\nI0826 22:54:46.411327 342 log.go:172] (0x400083a210) Data frame received for 3\nI0826 22:54:46.411599 342 log.go:172] (0x400083a210) Data frame received for 5\nI0826 22:54:46.411674 342 log.go:172] (0x4000a78000) (5) Data frame handling\nI0826 22:54:46.411785 342 log.go:172] (0x400083a210) Data frame received for 1\nI0826 22:54:46.411903 342 log.go:172] (0x4000a3e640) (1) Data frame handling\nI0826 22:54:46.412115 342 log.go:172] (0x4000a3e6e0) (3) Data frame handling\nI0826 22:54:46.412820 342 log.go:172] (0x4000a3e6e0) (3) Data frame sent\nI0826 22:54:46.413081 342 log.go:172] (0x4000a3e640) (1) Data frame sent\nI0826 22:54:46.413352 342 log.go:172] (0x400083a210) Data frame received for 3\nI0826 22:54:46.413455 342 log.go:172] (0x4000a3e6e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0826 22:54:46.414129 342 log.go:172] (0x4000a78000) (5) Data frame sent\nI0826 22:54:46.414214 342 log.go:172] (0x400083a210) Data frame received for 5\nI0826 22:54:46.414595 342 log.go:172] (0x400083a210) (0x4000a3e640) Stream removed, broadcasting: 1\nI0826 22:54:46.414906 342 log.go:172] (0x4000a78000) (5) Data frame handling\nI0826 22:54:46.417073 342 log.go:172] (0x400083a210) Go away received\nI0826 22:54:46.419606 342 log.go:172] (0x400083a210) (0x4000a3e640) Stream removed, broadcasting: 1\nI0826 22:54:46.419842 342 log.go:172] (0x400083a210) (0x4000a3e6e0) Stream removed, broadcasting: 3\nI0826 22:54:46.420024 342 log.go:172] (0x400083a210) (0x4000a78000) Stream removed, broadcasting: 5\n"
Aug 26 22:54:46.431: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 22:54:46.431: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Aug 26 22:54:46.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:54:47.948: INFO: stderr: "I0826 22:54:47.804199 365 log.go:172] (0x4000128fd0) (0x40005246e0) Create stream\nI0826 22:54:47.809714 365 log.go:172] (0x4000128fd0) (0x40005246e0) Stream added, broadcasting: 1\nI0826 22:54:47.827463 365 log.go:172] (0x4000128fd0) Reply frame received for 1\nI0826 22:54:47.828569 365 log.go:172] (0x4000128fd0) (0x400083a000) Create stream\nI0826 22:54:47.828675 365 log.go:172] (0x4000128fd0) (0x400083a000) Stream added, broadcasting: 3\nI0826 22:54:47.830432 365 log.go:172] (0x4000128fd0) Reply frame received for 3\nI0826 22:54:47.830747 365 log.go:172] (0x4000128fd0) (0x400042a140) Create stream\nI0826 22:54:47.830808 365 log.go:172] (0x4000128fd0) (0x400042a140) Stream added, broadcasting: 5\nI0826 22:54:47.831958 365 log.go:172] (0x4000128fd0) Reply frame received for 5\nI0826 22:54:47.915055 365 log.go:172] (0x4000128fd0) Data frame received for 5\nI0826 22:54:47.915302 365 log.go:172] (0x4000128fd0) Data frame received for 1\nI0826 22:54:47.915576 365 log.go:172] (0x4000128fd0) Data frame received for 3\nI0826 22:54:47.915752 365 log.go:172] (0x40005246e0) (1) Data frame handling\nI0826 22:54:47.915875 365 log.go:172] (0x400042a140) (5) Data frame handling\nI0826 22:54:47.917253 365 log.go:172] (0x400083a000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0826 22:54:47.918041 365 log.go:172] (0x40005246e0) (1) Data frame sent\nI0826 22:54:47.918290 365 log.go:172] (0x400083a000) (3) Data frame sent\nI0826 22:54:47.918568 365 log.go:172] (0x400042a140) (5) Data frame sent\nI0826 22:54:47.919011 365 log.go:172] (0x4000128fd0) Data frame received for 3\nI0826 22:54:47.919171 365 log.go:172] (0x400083a000) (3) Data frame handling\nI0826 22:54:47.919459 365 log.go:172] (0x4000128fd0) Data frame received for 5\nI0826 22:54:47.919689 365 log.go:172] (0x4000128fd0) (0x40005246e0) Stream removed, broadcasting: 1\nI0826 22:54:47.920853 365 log.go:172] (0x400042a140) (5) Data frame handling\nI0826 22:54:47.923286 365 log.go:172] (0x4000128fd0) Go away received\nI0826 22:54:47.936006 365 log.go:172] (0x4000128fd0) (0x40005246e0) Stream removed, broadcasting: 1\nI0826 22:54:47.936282 365 log.go:172] (0x4000128fd0) (0x400083a000) Stream removed, broadcasting: 3\nI0826 22:54:47.936699 365 log.go:172] (0x4000128fd0) (0x400042a140) Stream removed, broadcasting: 5\n"
Aug 26 22:54:47.949: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 22:54:47.949: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Aug 26 22:54:47.949: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:54:49.314: INFO: rc: 1
Aug 26 22:54:49.316: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0x4001e0e1e0 exit status 1 true [0x4000568ce0 0x4000568df8 0x4000568ed0] [0x4000568ce0 0x4000568df8 0x4000568ed0] [0x4000568db8 0x4000568e70] [0xad5158 0xad5158] 0x40035ea2a0 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Aug 26 22:54:59.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:55:00.772: INFO: rc: 1
Aug 26 22:55:00.773: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e2a0 exit status 1 true [0x4000568ee8 0x4000569090 0x40005692a0] [0x4000568ee8 0x4000569090 0x40005692a0] [0x4000569030 0x4000569250] [0xad5158 0xad5158] 0x40035ea660 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:55:10.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:55:12.044: INFO: rc: 1
Aug 26 22:55:12.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b40c0 exit status 1 true [0x4001dbe228 0x4001dbe5c8 0x4001dbe7b8] [0x4001dbe228 0x4001dbe5c8 0x4001dbe7b8] [0x4001dbe548 0x4001dbe788] [0xad5158 0xad5158] 0x4002087020 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:55:22.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:55:23.308: INFO: rc: 1
Aug 26 22:55:23.308: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001da2090 exit status 1 true [0x4000010398 0x40000105e0 0x4000010a68] [0x4000010398 0x40000105e0 0x4000010a68] [0x40000105c0 0x40000107d8] [0xad5158 0xad5158] 0x400342e2a0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:55:33.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:55:34.645: INFO: rc: 1
Aug 26 22:55:34.645: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b41b0 exit status 1 true [0x4001dbe810 0x4001dbe8e8 0x4001dbe998] [0x4001dbe810 0x4001dbe8e8 0x4001dbe998] [0x4001dbe8d8 0x4001dbe940] [0xad5158 0xad5158] 0x4002087380 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:55:44.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:55:45.959: INFO: rc: 1
Aug 26 22:55:45.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b4270 exit status 1 true [0x4001dbead8 0x4001dbec50 0x4001dbed78] [0x4001dbead8 0x4001dbec50 0x4001dbed78] [0x4001dbec40 0x4001dbed40] [0xad5158 0xad5158] 0x40020876e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:55:55.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:55:57.548: INFO: rc: 1
Aug 26 22:55:57.549: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40031a0120 exit status 1 true [0x40001aa000 0x4002b3c010 0x4002b3c028] [0x40001aa000 0x4002b3c010 0x4002b3c028] [0x4002b3c008 0x4002b3c020] [0xad5158 0xad5158] 0x4000c00300 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:56:07.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:56:08.796: INFO: rc: 1
Aug 26 22:56:08.797: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b4360 exit status 1 true [0x4001dbedd0 0x4001dbeeb0 0x4001dbefe8] [0x4001dbedd0 0x4001dbeeb0 0x4001dbefe8] [0x4001dbee90 0x4001dbef78] [0xad5158 0xad5158] 0x4002087aa0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:56:18.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:56:20.043: INFO: rc: 1
Aug 26 22:56:20.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40031a0240 exit status 1 true [0x4002b3c030 0x4002b3c078 0x4002b3c0d0] [0x4002b3c030 0x4002b3c078 0x4002b3c0d0] [0x4002b3c060 0x4002b3c0b0] [0xad5158 0xad5158] 0x4000c00a20 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:56:30.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:56:31.316: INFO: rc: 1
Aug 26 22:56:31.316: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40031a0300 exit status 1 true [0x4002b3c0f0 0x4002b3c130 0x4002b3c158]
[0x4002b3c0f0 0x4002b3c130 0x4002b3c158] [0x4002b3c128 0x4002b3c140] [0xad5158 0xad5158] 0x4000c00ea0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:56:41.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:56:42.547: INFO: rc: 1
Aug 26 22:56:42.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001da21b0 exit status 1 true [0x4000010b48 0x4000010e30 0x4000010fd0] [0x4000010b48 0x4000010e30 0x4000010fd0] [0x4000010d28 0x4000010f58] [0xad5158 0xad5158] 0x400342e720 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:56:52.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:56:53.834: INFO: rc: 1
Aug 26 22:56:53.835: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e120 exit status 1 true [0x40001aa490 0x4000568d60 0x4000568e10] [0x40001aa490 0x4000568d60 0x4000568e10] [0x4000568ce0 0x4000568df8] [0xad5158 0xad5158] 0x40035ea2a0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:57:03.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:57:05.087: INFO: rc: 1
Aug 26 22:57:05.088: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b4090 exit status 1 true [0x4001dbe228 0x4001dbe5c8 0x4001dbe7b8] [0x4001dbe228 0x4001dbe5c8 0x4001dbe7b8] [0x4001dbe548 0x4001dbe788] [0xad5158 0xad5158] 0x4002087020 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:57:15.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:57:16.352: INFO: rc: 1
Aug 26 22:57:16.352: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b41e0 exit status 1 true [0x4001dbe810 0x4001dbe8e8 0x4001dbe998] [0x4001dbe810 0x4001dbe8e8 0x4001dbe998] [0x4001dbe8d8 0x4001dbe940] [0xad5158 0xad5158] 0x4002087380 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:57:26.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:57:27.649: INFO: rc: 1
Aug 26 22:57:27.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from
server (NotFound): pods "ss-2" not found [] 0x4001e0e2d0 exit status 1 true [0x4000568e70 0x4000568f98 0x40005691b8] [0x4000568e70 0x4000568f98 0x40005691b8] [0x4000568ee8 0x4000569090] [0xad5158 0xad5158] 0x40035ea660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:57:37.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:57:39.039: INFO: rc: 1 Aug 26 22:57:39.040: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e390 exit status 1 true [0x4000569250 0x4000569498 0x4000569560] [0x4000569250 0x4000569498 0x4000569560] [0x4000569398 0x4000569510] [0xad5158 0xad5158] 0x40035ea9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:57:49.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:57:50.303: INFO: rc: 1 Aug 26 22:57:50.303: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001da20f0 exit status 1 true [0x4002b3c008 0x4002b3c020 0x4002b3c040] [0x4002b3c008 0x4002b3c020 0x4002b3c040] [0x4002b3c018 0x4002b3c030] [0xad5158 0xad5158] 0x4000c00300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:58:00.304: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:58:01.760: INFO: rc: 1 Aug 26 22:58:01.760: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b42d0 exit status 1 true [0x4001dbead8 0x4001dbec50 0x4001dbed78] [0x4001dbead8 0x4001dbec50 0x4001dbed78] [0x4001dbec40 0x4001dbed40] [0xad5158 0xad5158] 0x40020876e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:58:11.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:58:13.000: INFO: rc: 1 Aug 26 22:58:13.001: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40031a0150 exit status 1 true [0x4000010398 0x40000105e0 0x4000010a68] [0x4000010398 0x40000105e0 0x4000010a68] [0x40000105c0 0x40000107d8] [0xad5158 0xad5158] 0x400342e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:58:23.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:58:24.289: INFO: rc: 1 Aug 26 22:58:24.289: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40031a0270 exit status 1 true [0x4000010b48 0x4000010e30 0x4000010fd0] [0x4000010b48 0x4000010e30 0x4000010fd0] [0x4000010d28 0x4000010f58] [0xad5158 0xad5158] 0x400342e720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:58:34.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:58:35.528: INFO: rc: 1 Aug 26 22:58:35.528: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e480 exit status 1 true [0x40005695d0 0x4000569830 0x4000569938] [0x40005695d0 0x4000569830 0x4000569938] [0x4000569700 0x40005698e8] [0xad5158 0xad5158] 0x40035eaea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:58:45.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:58:46.815: INFO: rc: 1 Aug 26 22:58:46.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001da2270 exit status 1 true [0x4002b3c060 0x4002b3c0b0 0x4002b3c110] [0x4002b3c060 0x4002b3c0b0 0x4002b3c110] [0x4002b3c098 0x4002b3c0f0] [0xad5158 0xad5158] 0x4000c00a20 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:58:56.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:58:58.078: INFO: rc: 1 Aug 26 22:58:58.078: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e150 exit status 1 true [0x40001aa490 0x4000568d60 0x4000568e10] [0x40001aa490 0x4000568d60 0x4000568e10] [0x4000568ce0 0x4000568df8] [0xad5158 0xad5158] 0x40035ea2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:59:08.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:59:09.334: INFO: rc: 1 Aug 26 22:59:09.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e270 exit status 1 true [0x4000568e70 0x4000568f98 0x40005691b8] [0x4000568e70 0x4000568f98 0x40005691b8] [0x4000568ee8 0x4000569090] [0xad5158 0xad5158] 0x40035ea660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:59:19.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:59:20.594: INFO: rc: 1 Aug 26 22:59:20.594: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b40c0 exit status 1 true [0x4001dbe228 0x4001dbe5c8 0x4001dbe7b8] [0x4001dbe228 0x4001dbe5c8 0x4001dbe7b8] [0x4001dbe548 0x4001dbe788] [0xad5158 0xad5158] 0x4002087020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:59:30.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:59:31.855: INFO: rc: 1 Aug 26 22:59:31.855: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40022b4180 exit status 1 true [0x4001dbe810 0x4001dbe8e8 0x4001dbe998] [0x4001dbe810 0x4001dbe8e8 0x4001dbe998] [0x4001dbe8d8 0x4001dbe940] [0xad5158 0xad5158] 0x4002087380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 26 22:59:41.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 26 22:59:43.115: INFO: rc: 1 Aug 26 22:59:43.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e0e3f0 exit status 1 true [0x4000569250 0x4000569498 0x4000569560] 
[0x4000569250 0x4000569498 0x4000569560] [0x4000569398 0x4000569510] [0xad5158 0xad5158] 0x40035ea9c0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Aug 26 22:59:53.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-174 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 22:59:54.396: INFO: rc: 1
Aug 26 22:59:54.397: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Aug 26 22:59:54.397: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 26 22:59:54.421: INFO: Deleting all statefulset in ns statefulset-174
Aug 26 22:59:54.424: INFO: Scaling statefulset ss to 0
Aug 26 22:59:54.434: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 22:59:54.436: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 22:59:54.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-174" for this suite.
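The repeated `rc: 1` above is easy to misread: the in-pod command ends in `|| true`, which always yields exit status 0, so the nonzero status cannot come from the pod's shell. It is kubectl's own exit status — the API server rejected the exec with NotFound because pod "ss-2" no longer existed, so no shell ever ran. A minimal local sketch of the two cases (paths are illustrative, not from the cluster):

```shell
#!/bin/sh
# `|| true` masks the failure of the command before it, so the shell
# that kubectl would have run inside the pod always exits 0:
sh -c 'mv -v /no/such/file /tmp/ 2>/dev/null || true'
echo "in-pod shell rc: $?"        # 0 -- mv's failure is masked by || true

# The rc: 1 in the log is therefore kubectl's own status: the exec
# failed server-side before reaching a shell. A local stand-in:
sh -c 'exit 1'
echo "kubectl-like rc: $?"        # 1
```

This is why the e2e framework keeps retrying RunHostCmd: it distinguishes a command that ran and failed from an exec that never reached the pod at all.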
Aug 26 23:00:00.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:00:00.620: INFO: namespace statefulset-174 deletion completed in 6.156279102s
• [SLOW TEST:384.873 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:00:00.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 26 23:00:00.724: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator
[Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Aug 26 23:00:05.092: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 26 23:00:07.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:00:09.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, 
loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079605, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:00:12.151: INFO: Waited 730.377222ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:00:12.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2991" for this suite. Aug 26 23:00:19.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:00:19.335: INFO: namespace aggregator-2991 deletion completed in 6.487767511s • [SLOW TEST:18.713 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:00:19.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:00:19.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2" in namespace "projected-8194" to be "success or failure"
Aug 26 23:00:19.445: INFO: Pod "downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.563099ms
Aug 26 23:00:21.602: INFO: Pod "downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177326587s
Aug 26 23:00:23.610: INFO: Pod "downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185387359s
STEP: Saw pod success
Aug 26 23:00:23.611: INFO: Pod "downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2" satisfied condition "success or failure"
Aug 26 23:00:23.617: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2 container client-container:
STEP: delete the pod
Aug 26 23:00:23.675: INFO: Waiting for pod downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2 to disappear
Aug 26 23:00:23.680: INFO: Pod downwardapi-volume-ea4449ff-ff66-45d1-a02f-30ab01442de2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:00:23.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8194" for this suite.
Aug 26 23:00:29.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:00:29.884: INFO: namespace projected-8194 deletion completed in 6.19487025s
• [SLOW TEST:10.548 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:00:29.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-cf81f080-9acf-4aa1-bcee-6bfdecf7a4de in namespace container-probe-5663
Aug 26 23:00:34.016: INFO: Started pod busybox-cf81f080-9acf-4aa1-bcee-6bfdecf7a4de in namespace container-probe-5663
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:00:34.021: INFO: Initial restart count of pod busybox-cf81f080-9acf-4aa1-bcee-6bfdecf7a4de is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:04:35.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5663" for this suite.
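The probe test above passes because the kubelet's exec-probe contract is purely exit-code based: it runs `cat /tmp/health` inside the container, treats exit status 0 as healthy, and counts any nonzero exit as a probe failure. The pod keeps the file in place for the full observation window, so restartCount stays 0. A local sketch of that contract (the temp file stands in for the pod's /tmp/health):

```shell
#!/bin/sh
# Exit-code contract behind a "cat /tmp/health" exec liveness probe:
# exit 0 = healthy, nonzero = probe failure. A temp file stands in
# for /tmp/health inside the container.
HEALTH_FILE=$(mktemp)

if cat "$HEALTH_FILE" >/dev/null 2>&1; then
    echo "file present: probe passes, restartCount stays 0"
fi

rm -f "$HEALTH_FILE"
if ! cat "$HEALTH_FILE" >/dev/null 2>&1; then
    echo "file gone: probe fails; repeated failures would restart the container"
fi
```

The companion conformance test takes the opposite path: it deletes the file, the probe starts returning nonzero, and the kubelet restarts the container once failureThreshold consecutive failures accumulate.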
Aug 26 23:04:41.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:04:41.556: INFO: namespace container-probe-5663 deletion completed in 6.223838748s • [SLOW TEST:251.671 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:04:41.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 26 23:04:54.139: INFO: ExecWithOptions 
{Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 23:04:54.139: INFO: >>> kubeConfig: /root/.kube/config I0826 23:04:54.206804 7 log.go:172] (0x400066c8f0) (0x4000c9a500) Create stream I0826 23:04:54.207229 7 log.go:172] (0x400066c8f0) (0x4000c9a500) Stream added, broadcasting: 1 I0826 23:04:54.223808 7 log.go:172] (0x400066c8f0) Reply frame received for 1 I0826 23:04:54.224441 7 log.go:172] (0x400066c8f0) (0x4001f3c000) Create stream I0826 23:04:54.224506 7 log.go:172] (0x400066c8f0) (0x4001f3c000) Stream added, broadcasting: 3 I0826 23:04:54.226438 7 log.go:172] (0x400066c8f0) Reply frame received for 3 I0826 23:04:54.226666 7 log.go:172] (0x400066c8f0) (0x4000c9a5a0) Create stream I0826 23:04:54.226720 7 log.go:172] (0x400066c8f0) (0x4000c9a5a0) Stream added, broadcasting: 5 I0826 23:04:54.227887 7 log.go:172] (0x400066c8f0) Reply frame received for 5 I0826 23:04:54.289961 7 log.go:172] (0x400066c8f0) Data frame received for 3 I0826 23:04:54.290257 7 log.go:172] (0x400066c8f0) Data frame received for 5 I0826 23:04:54.290385 7 log.go:172] (0x4000c9a5a0) (5) Data frame handling I0826 23:04:54.290469 7 log.go:172] (0x4001f3c000) (3) Data frame handling I0826 23:04:54.290893 7 log.go:172] (0x400066c8f0) Data frame received for 1 I0826 23:04:54.291022 7 log.go:172] (0x4000c9a500) (1) Data frame handling I0826 23:04:54.291684 7 log.go:172] (0x4000c9a500) (1) Data frame sent I0826 23:04:54.292051 7 log.go:172] (0x4001f3c000) (3) Data frame sent I0826 23:04:54.292188 7 log.go:172] (0x400066c8f0) Data frame received for 3 I0826 23:04:54.292305 7 log.go:172] (0x4001f3c000) (3) Data frame handling I0826 23:04:54.295325 7 log.go:172] (0x400066c8f0) (0x4000c9a500) Stream removed, broadcasting: 1 I0826 23:04:54.295952 7 log.go:172] (0x400066c8f0) Go away received I0826 23:04:54.297928 7 log.go:172] (0x400066c8f0) (0x4000c9a500) Stream removed, 
broadcasting: 1 I0826 23:04:54.298397 7 log.go:172] (0x400066c8f0) (0x4001f3c000) Stream removed, broadcasting: 3 I0826 23:04:54.298608 7 log.go:172] (0x400066c8f0) (0x4000c9a5a0) Stream removed, broadcasting: 5 Aug 26 23:04:54.299: INFO: Exec stderr: "" Aug 26 23:04:54.299: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 23:04:54.299: INFO: >>> kubeConfig: /root/.kube/config I0826 23:04:54.365494 7 log.go:172] (0x4000a54420) (0x4000f805a0) Create stream I0826 23:04:54.365700 7 log.go:172] (0x4000a54420) (0x4000f805a0) Stream added, broadcasting: 1 I0826 23:04:54.369296 7 log.go:172] (0x4000a54420) Reply frame received for 1 I0826 23:04:54.369452 7 log.go:172] (0x4000a54420) (0x4000f80780) Create stream I0826 23:04:54.369533 7 log.go:172] (0x4000a54420) (0x4000f80780) Stream added, broadcasting: 3 I0826 23:04:54.371466 7 log.go:172] (0x4000a54420) Reply frame received for 3 I0826 23:04:54.371662 7 log.go:172] (0x4000a54420) (0x4000f808c0) Create stream I0826 23:04:54.371770 7 log.go:172] (0x4000a54420) (0x4000f808c0) Stream added, broadcasting: 5 I0826 23:04:54.373624 7 log.go:172] (0x4000a54420) Reply frame received for 5 I0826 23:04:54.442902 7 log.go:172] (0x4000a54420) Data frame received for 3 I0826 23:04:54.443083 7 log.go:172] (0x4000f80780) (3) Data frame handling I0826 23:04:54.443234 7 log.go:172] (0x4000a54420) Data frame received for 5 I0826 23:04:54.443440 7 log.go:172] (0x4000f808c0) (5) Data frame handling I0826 23:04:54.443646 7 log.go:172] (0x4000f80780) (3) Data frame sent I0826 23:04:54.443837 7 log.go:172] (0x4000a54420) Data frame received for 3 I0826 23:04:54.444028 7 log.go:172] (0x4000f80780) (3) Data frame handling I0826 23:04:54.444210 7 log.go:172] (0x4000a54420) Data frame received for 1 I0826 23:04:54.444344 7 log.go:172] (0x4000f805a0) (1) Data frame handling I0826 
23:04:54.444491 7 log.go:172] (0x4000f805a0) (1) Data frame sent I0826 23:04:54.444659 7 log.go:172] (0x4000a54420) (0x4000f805a0) Stream removed, broadcasting: 1 I0826 23:04:54.444887 7 log.go:172] (0x4000a54420) Go away received I0826 23:04:54.445373 7 log.go:172] (0x4000a54420) (0x4000f805a0) Stream removed, broadcasting: 1 I0826 23:04:54.445513 7 log.go:172] (0x4000a54420) (0x4000f80780) Stream removed, broadcasting: 3 I0826 23:04:54.445652 7 log.go:172] (0x4000a54420) (0x4000f808c0) Stream removed, broadcasting: 5 Aug 26 23:04:54.445: INFO: Exec stderr: "" Aug 26 23:04:54.446: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 23:04:54.446: INFO: >>> kubeConfig: /root/.kube/config I0826 23:04:54.508707 7 log.go:172] (0x400066d3f0) (0x4000c9a8c0) Create stream I0826 23:04:54.508908 7 log.go:172] (0x400066d3f0) (0x4000c9a8c0) Stream added, broadcasting: 1 I0826 23:04:54.513494 7 log.go:172] (0x400066d3f0) Reply frame received for 1 I0826 23:04:54.513773 7 log.go:172] (0x400066d3f0) (0x4000c9a960) Create stream I0826 23:04:54.513892 7 log.go:172] (0x400066d3f0) (0x4000c9a960) Stream added, broadcasting: 3 I0826 23:04:54.515832 7 log.go:172] (0x400066d3f0) Reply frame received for 3 I0826 23:04:54.515988 7 log.go:172] (0x400066d3f0) (0x4000c9aa00) Create stream I0826 23:04:54.516072 7 log.go:172] (0x400066d3f0) (0x4000c9aa00) Stream added, broadcasting: 5 I0826 23:04:54.517779 7 log.go:172] (0x400066d3f0) Reply frame received for 5 I0826 23:04:54.597014 7 log.go:172] (0x400066d3f0) Data frame received for 5 I0826 23:04:54.597201 7 log.go:172] (0x4000c9aa00) (5) Data frame handling I0826 23:04:54.597337 7 log.go:172] (0x400066d3f0) Data frame received for 3 I0826 23:04:54.597499 7 log.go:172] (0x4000c9a960) (3) Data frame handling I0826 23:04:54.597616 7 log.go:172] (0x4000c9a960) (3) Data frame sent I0826 
Aug 26 23:04:54.599: INFO: Exec stderr: ""
Aug 26 23:04:54.599: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:54.599: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:54.737: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 26 23:04:54.737: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:54.737: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:54.900: INFO: Exec stderr: ""
Aug 26 23:04:54.900: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:54.901: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:55.027: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 26 23:04:55.027: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:55.027: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:55.163: INFO: Exec stderr: ""
Aug 26 23:04:55.163: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:55.163: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:55.309: INFO: Exec stderr: ""
Aug 26 23:04:55.309: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:55.310: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:55.446: INFO: Exec stderr: ""
Aug 26 23:04:55.446: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3560 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:04:55.447: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:04:55.606: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:04:55.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3560" for this suite.
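The exec checks above reduce to one question per container: does its /etc/hosts start with the header comment the kubelet writes when it manages the file? A minimal sketch of that check, assuming the header string recent kubelets emit (verify against your kubelet version):

```python
# Header the kubelet prepends to a managed /etc/hosts
# (assumed constant; taken from kubelet source, not from this log).
KUBELET_HEADER = "# Kubernetes-managed hosts file."


def is_kubelet_managed(etc_hosts_content: str) -> bool:
    """Return True if the file content carries the kubelet's header comment."""
    return etc_hosts_content.lstrip().startswith(KUBELET_HEADER)


# Sample contents standing in for the `cat /etc/hosts` exec output above.
managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
original = "127.0.0.1\tlocalhost\n"
```

Containers that mount their own /etc/hosts (busybox-3) or run in a pod with hostNetwork=true are expected to fail this check, which is exactly what the "not kubelet-managed" verification steps assert.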
Aug 26 23:05:47.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:05:47.837: INFO: namespace e2e-kubelet-etc-hosts-3560 deletion completed in 52.221322913s
• [SLOW TEST:66.277 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:05:47.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 26 23:05:47.922: INFO: Waiting up to 5m0s for pod "downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94" in namespace "downward-api-1492" to be "success or failure"
Aug 26 23:05:47.954: INFO: Pod "downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94": Phase="Pending", Reason="", readiness=false. Elapsed: 31.653013ms
Aug 26 23:05:49.961: INFO: Pod "downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038277413s
Aug 26 23:05:51.968: INFO: Pod "downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045626078s
STEP: Saw pod success
Aug 26 23:05:51.968: INFO: Pod "downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94" satisfied condition "success or failure"
Aug 26 23:05:51.974: INFO: Trying to get logs from node iruya-worker pod downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94 container dapi-container:
STEP: delete the pod
Aug 26 23:05:52.207: INFO: Waiting for pod downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94 to disappear
Aug 26 23:05:52.361: INFO: Pod downward-api-f22ac8f1-cf36-4b56-ac7e-06a946545b94 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:05:52.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1492" for this suite.
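The "pod UID as env vars" pod exposes its own UID through the downward API's fieldRef mechanism. A sketch of the relevant env-var stanza, built as a plain dict for illustration (the container name dapi-container comes from the log; the helper and the busybox image are illustrative, while metadata.uid is the standard fieldPath):

```python
def downward_api_env(name: str, field_path: str) -> dict:
    """Build an env-var entry whose value is resolved from the pod's own metadata."""
    return {
        "name": name,
        "valueFrom": {"fieldRef": {"fieldPath": field_path}},
    }


# Illustrative container spec fragment; the real e2e test uses its own image.
container = {
    "name": "dapi-container",
    "image": "busybox",
    "env": [downward_api_env("POD_UID", "metadata.uid")],
}
```

At runtime the kubelet substitutes the pod's UID into POD_UID, and the test passes once the container's output shows the expected value.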
Aug 26 23:05:58.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:05:58.828: INFO: namespace downward-api-1492 deletion completed in 6.455857332s
• [SLOW TEST:10.989 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:05:58.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 26 23:06:02.948: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-20948d3d-58d4-4e23-8cd6-22bba62309d9,GenerateName:,Namespace:events-9237,SelfLink:/api/v1/namespaces/events-9237/pods/send-events-20948d3d-58d4-4e23-8cd6-22bba62309d9,UID:7ec72af5-b94a-4ee2-b210-d2a105cdcf33,ResourceVersion:3038374,Generation:0,CreationTimestamp:2020-08-26 23:05:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 902057190,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q6t86 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q6t86,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-q6t86 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4002cf4550} {node.kubernetes.io/unreachable Exists NoExecute 0x4002cf4570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:05:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:06:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:06:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:05:58 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.42,StartTime:2020-08-26 23:05:58 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-26 23:06:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://bbccee9547cfee4616789814afffadf702cc25911584a61aad1bbfe3daa2cab0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Aug 26 23:06:04.965: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 26 23:06:06.974: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:06:06.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9237" for this suite.
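The "checking for scheduler event" and "checking for kubelet event" steps work by listing events narrowed to this pod with a field selector, once with source=default-scheduler and once with the node's kubelet as source. A sketch of composing such a selector string (the helper name is made up; the keys are the standard event field-selector keys, which is an assumption about the exact upstream test code):

```python
def event_field_selector(pod_name: str, namespace: str, uid: str, source: str) -> str:
    """Compose a field selector that narrows an event list to one pod and one source."""
    return ",".join([
        "involvedObject.kind=Pod",
        f"involvedObject.name={pod_name}",
        f"involvedObject.namespace={namespace}",
        f"involvedObject.uid={uid}",
        f"source={source}",
    ])


# Values taken from the pod dump in the log above.
sel = event_field_selector(
    "send-events-20948d3d-58d4-4e23-8cd6-22bba62309d9",
    "events-9237",
    "7ec72af5-b94a-4ee2-b210-d2a105cdcf33",
    "kubelet",
)
```

The test passes once each filtered list comes back non-empty, i.e. both the scheduler and the kubelet have reported events for the pod.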
Aug 26 23:06:45.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:06:45.153: INFO: namespace events-9237 deletion completed in 38.153533699s
• [SLOW TEST:46.325 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:06:45.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1196.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1196.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1196.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1196.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1196.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1196.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:06:51.344: INFO: DNS probes using dns-1196/dns-test-4082350e-6f50-44f5-b7a8-9a4609ed4839 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:06:51.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1196" for this suite.
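The awk one-liner in the probe scripts above derives the pod's DNS A record by replacing the dots of the pod IP with dashes and appending `<namespace>.pod.<cluster-domain>`. The same transformation, written out in Python for clarity (function name is ours, not the test's):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Mirror the probe's awk pipeline: dashed pod IP + <namespace>.pod.<domain>."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.{cluster_domain}"


# Using the pod IP seen earlier in this log:
# pod_a_record("10.244.2.42", "dns-1196") == "10-244-2-42.dns-1196.pod.cluster.local"
```

The probes then resolve that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an OK marker file for each query that returns an answer.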
Aug 26 23:06:57.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:06:57.566: INFO: namespace dns-1196 deletion completed in 6.165223275s
• [SLOW TEST:12.409 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:06:57.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:07:23.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-536" for this suite.
Aug 26 23:07:29.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:07:30.055: INFO: namespace namespaces-536 deletion completed in 6.182231605s
STEP: Destroying namespace "nsdeletetest-1576" for this suite.
Aug 26 23:07:30.059: INFO: Namespace nsdeletetest-1576 was already deleted
STEP: Destroying namespace "nsdeletetest-3966" for this suite.
Aug 26 23:07:36.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:07:36.201: INFO: namespace nsdeletetest-3966 deletion completed in 6.142100147s
• [SLOW TEST:38.634 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:07:36.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-3b1ea9fd-463e-48b0-94b5-fff615ee2b1c
STEP: Creating a pod to test consume configMaps
Aug 26 23:07:36.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d" in namespace "configmap-3879" to be "success or failure"
Aug 26 23:07:36.326: INFO: Pod "pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.230815ms
Aug 26 23:07:38.345: INFO: Pod "pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028709666s
Aug 26 23:07:40.353: INFO: Pod "pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036031468s
STEP: Saw pod success
Aug 26 23:07:40.353: INFO: Pod "pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d" satisfied condition "success or failure"
Aug 26 23:07:40.357: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d container configmap-volume-test:
STEP: delete the pod
Aug 26 23:07:40.409: INFO: Waiting for pod pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d to disappear
Aug 26 23:07:40.421: INFO: Pod pod-configmaps-c541f34d-d277-43fe-8cdb-04c86e01bd0d no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:07:40.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3879" for this suite.
Aug 26 23:07:46.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:07:46.583: INFO: namespace configmap-3879 deletion completed in 6.152850552s
• [SLOW TEST:10.378 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:07:46.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 26 23:07:46.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97" in namespace "projected-1210" to be "success or failure" Aug 26 23:07:46.752: INFO: Pod "downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97": Phase="Pending", Reason="", readiness=false. Elapsed: 71.015248ms Aug 26 23:07:48.812: INFO: Pod "downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131417097s Aug 26 23:07:50.818: INFO: Pod "downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.137772466s STEP: Saw pod success Aug 26 23:07:50.819: INFO: Pod "downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97" satisfied condition "success or failure" Aug 26 23:07:50.824: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97 container client-container: STEP: delete the pod Aug 26 23:07:50.927: INFO: Waiting for pod downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97 to disappear Aug 26 23:07:50.936: INFO: Pod downwardapi-volume-2e150bda-726d-47c7-b481-3b0f3f2aca97 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:07:50.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1210" for this suite. Aug 26 23:07:56.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:07:57.104: INFO: namespace projected-1210 deletion completed in 6.16183511s • [SLOW TEST:10.519 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:07:57.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 26 23:07:57.258: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:08:06.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3022" for this suite. 
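Editor's note: the InitContainer test above submits a pod whose `spec.initContainers` must all run to completion, in order, before the regular containers start. The framework's actual manifest is not printed in the log; the sketch below is a hypothetical reconstruction (pod name, images, and commands are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example        # hypothetical; the test generates its own name
spec:
  restartPolicy: Always         # the "RestartAlways" variant under test
  initContainers:               # each must exit 0, in order, before 'containers' start
  - name: init1
    image: busybox:1.29         # assumed image
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1 # assumed; a long-running no-op container
```

The test then watches pod status to confirm both init container statuses report `Terminated` with exit code 0 before the main container becomes ready.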
Aug 26 23:08:30.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:08:31.112: INFO: namespace init-container-3022 deletion completed in 24.141659994s • [SLOW TEST:34.005 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:08:31.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4594d8b4-b8f1-4912-a9a5-2ea522eefc4a STEP: Creating a pod to test consume secrets Aug 26 23:08:31.246: INFO: Waiting up to 5m0s for pod "pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce" in namespace "secrets-856" to be "success or failure" Aug 26 23:08:31.255: INFO: Pod "pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce": Phase="Pending", Reason="", 
readiness=false. Elapsed: 8.61667ms Aug 26 23:08:33.262: INFO: Pod "pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015946535s Aug 26 23:08:35.270: INFO: Pod "pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce": Phase="Running", Reason="", readiness=true. Elapsed: 4.023061451s Aug 26 23:08:37.277: INFO: Pod "pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030096328s STEP: Saw pod success Aug 26 23:08:37.277: INFO: Pod "pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce" satisfied condition "success or failure" Aug 26 23:08:37.282: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce container secret-volume-test: STEP: delete the pod Aug 26 23:08:37.311: INFO: Waiting for pod pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce to disappear Aug 26 23:08:37.339: INFO: Pod pod-secrets-3d757a27-fea9-40ff-bcfc-64044f0a95ce no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:08:37.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-856" for this suite. 
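Editor's note: "consumable in multiple volumes in a pod" means the same Secret is mounted at two different paths. A hypothetical manifest matching the log (the Secret name is taken from the log; the pod name, image, and command are assumptions — the real test uses its mounttest utility image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example     # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-4594d8b4-b8f1-4912-a9a5-2ea522eefc4a
  - name: secret-volume-2      # the same Secret, mounted a second time
    secret:
      secretName: secret-test-4594d8b4-b8f1-4912-a9a5-2ea522eefc4a
  containers:
  - name: secret-volume-test
    image: busybox:1.29        # assumed image
    command: ['sh', '-c', 'cat /etc/secret-volume/data-1']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
```

The pod runs to `Succeeded`, and the test reads the container's logs to verify the secret data was visible at both mount points.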
Aug 26 23:08:43.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:08:44.055: INFO: namespace secrets-856 deletion completed in 6.707355871s • [SLOW TEST:12.943 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:08:44.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 26 23:08:44.395: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6707,SelfLink:/api/v1/namespaces/watch-6707/configmaps/e2e-watch-test-watch-closed,UID:58437be2-b0de-484c-8cd7-5638b5051904,ResourceVersion:3038904,Generation:0,CreationTimestamp:2020-08-26 23:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 26 23:08:44.398: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6707,SelfLink:/api/v1/namespaces/watch-6707/configmaps/e2e-watch-test-watch-closed,UID:58437be2-b0de-484c-8cd7-5638b5051904,ResourceVersion:3038905,Generation:0,CreationTimestamp:2020-08-26 23:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 26 23:08:44.467: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6707,SelfLink:/api/v1/namespaces/watch-6707/configmaps/e2e-watch-test-watch-closed,UID:58437be2-b0de-484c-8cd7-5638b5051904,ResourceVersion:3038906,Generation:0,CreationTimestamp:2020-08-26 23:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 26 23:08:44.467: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6707,SelfLink:/api/v1/namespaces/watch-6707/configmaps/e2e-watch-test-watch-closed,UID:58437be2-b0de-484c-8cd7-5638b5051904,ResourceVersion:3038907,Generation:0,CreationTimestamp:2020-08-26 23:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:08:44.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6707" for this suite. 
Aug 26 23:08:50.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:08:50.707: INFO: namespace watch-6707 deletion completed in 6.188927734s • [SLOW TEST:6.648 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:08:50.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-b3e55d94-54a1-45f9-891c-d72e479f0fe9 STEP: Creating a pod to test consume configMaps Aug 26 23:08:50.845: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3" in namespace "configmap-1908" to be "success or failure" 
Aug 26 23:08:50.854: INFO: Pod "pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966764ms Aug 26 23:08:52.860: INFO: Pod "pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015286942s Aug 26 23:08:54.875: INFO: Pod "pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030198703s STEP: Saw pod success Aug 26 23:08:54.876: INFO: Pod "pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3" satisfied condition "success or failure" Aug 26 23:08:54.885: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3 container configmap-volume-test: STEP: delete the pod Aug 26 23:08:54.945: INFO: Waiting for pod pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3 to disappear Aug 26 23:08:54.956: INFO: Pod pod-configmaps-1f0e4fec-0abb-4d5a-a1db-5af0a952fda3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:08:54.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1908" for this suite. 
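Editor's note: the ConfigMap multi-volume test follows the same pattern as the Secrets test above — one ConfigMap, two volume mounts. A hypothetical sketch (the ConfigMap name comes from the log; everything else is assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example  # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-b3e55d94-54a1-45f9-891c-d72e479f0fe9
  - name: configmap-volume-2   # same ConfigMap, second mount point
    configMap:
      name: configmap-test-volume-b3e55d94-54a1-45f9-891c-d72e479f0fe9
  containers:
  - name: configmap-volume-test
    image: busybox:1.29        # assumed image
    command: ['sh', '-c', 'cat /etc/configmap-volume/data-1']
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
      readOnly: true
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
      readOnly: true
```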
Aug 26 23:09:01.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:09:01.134: INFO: namespace configmap-1908 deletion completed in 6.167025825s • [SLOW TEST:10.424 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:09:01.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-bfb0c7e0-643b-4109-b4d3-3dbb82864806 STEP: Creating a pod to test consume secrets Aug 26 23:09:01.450: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb" in namespace "projected-2548" to be "success or failure" Aug 26 23:09:01.488: INFO: Pod "pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.786191ms Aug 26 23:09:03.495: INFO: Pod "pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04468832s Aug 26 23:09:05.749: INFO: Pod "pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.298130234s STEP: Saw pod success Aug 26 23:09:05.749: INFO: Pod "pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb" satisfied condition "success or failure" Aug 26 23:09:05.772: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb container projected-secret-volume-test: STEP: delete the pod Aug 26 23:09:05.886: INFO: Waiting for pod pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb to disappear Aug 26 23:09:05.926: INFO: Pod pod-projected-secrets-86810574-d860-45d7-be5b-e6034a10f6fb no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:09:05.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2548" for this suite. 
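Editor's note: "with mappings and Item Mode set" refers to the `items` list of a projected secret source, which remaps keys to file paths and sets a per-file permission `mode`. A hypothetical reconstruction (the Secret name comes from the log; key, path, mode, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example  # hypothetical
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-bfb0c7e0-643b-4109-b4d3-3dbb82864806
          items:
          - key: data-1            # assumed key name
            path: new-path-data-1  # mapping: key is exposed under a different filename
            mode: 0400             # per-item file mode, overriding defaultMode
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29            # assumed image
    command: ['sh', '-c', 'ls -l /etc/projected-secret-volume']
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
```

The test checks both the file content at the mapped path and that the file's permission bits match the requested mode (Linux-only, hence the `[LinuxOnly]` tag).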
Aug 26 23:09:11.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:09:12.080: INFO: namespace projected-2548 deletion completed in 6.145473367s • [SLOW TEST:10.944 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:09:12.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 26 23:09:16.730: INFO: Successfully 
updated pod "pod-update-activedeadlineseconds-1ad429cc-b33d-4028-a59e-d67740b0d875" Aug 26 23:09:16.731: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1ad429cc-b33d-4028-a59e-d67740b0d875" in namespace "pods-6065" to be "terminated due to deadline exceeded" Aug 26 23:09:16.773: INFO: Pod "pod-update-activedeadlineseconds-1ad429cc-b33d-4028-a59e-d67740b0d875": Phase="Running", Reason="", readiness=true. Elapsed: 41.735587ms Aug 26 23:09:18.795: INFO: Pod "pod-update-activedeadlineseconds-1ad429cc-b33d-4028-a59e-d67740b0d875": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.064503157s Aug 26 23:09:18.796: INFO: Pod "pod-update-activedeadlineseconds-1ad429cc-b33d-4028-a59e-d67740b0d875" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:09:18.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6065" for this suite. 
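Editor's note: this test creates a long-running pod and then updates `spec.activeDeadlineSeconds` to a small value, after which the kubelet terminates the pod and it transitions to `Phase=Failed` with `Reason=DeadlineExceeded`, as seen in the log. A hypothetical sketch of the pod before the update (name, image, and values are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example  # hypothetical
spec:
  activeDeadlineSeconds: 30     # assumed initial value; the test later lowers it
  containers:
  - name: main
    image: busybox:1.29         # assumed image
    command: ['sh', '-c', 'sleep 600']
```

Lowering the deadline on a running pod (for example with `kubectl patch pod <name> -p '{"spec":{"activeDeadlineSeconds":5}}'`) is one of the few mutable pod-spec fields; once the pod has been active longer than the deadline, it is killed and marked `DeadlineExceeded`.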
Aug 26 23:09:24.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:09:24.968: INFO: namespace pods-6065 deletion completed in 6.160780864s • [SLOW TEST:12.887 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:09:24.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Aug 26 23:09:25.064: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix201066993/test' STEP: retrieving proxy /api/ output 
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:09:26.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3199" for this suite. Aug 26 23:09:32.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:09:32.330: INFO: namespace kubectl-3199 deletion completed in 6.136000679s • [SLOW TEST:7.361 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:09:32.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 26 23:09:32.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203" in namespace "projected-9427" to be "success or failure" Aug 26 23:09:32.497: INFO: Pod "downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203": Phase="Pending", Reason="", readiness=false. Elapsed: 45.082898ms Aug 26 23:09:34.644: INFO: Pod "downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191906337s Aug 26 23:09:36.650: INFO: Pod "downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203": Phase="Running", Reason="", readiness=true. Elapsed: 4.198059466s Aug 26 23:09:38.658: INFO: Pod "downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.205601886s STEP: Saw pod success Aug 26 23:09:38.658: INFO: Pod "downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203" satisfied condition "success or failure" Aug 26 23:09:38.662: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203 container client-container: STEP: delete the pod Aug 26 23:09:38.715: INFO: Waiting for pod downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203 to disappear Aug 26 23:09:38.747: INFO: Pod downwardapi-volume-c07c0e6d-b260-4966-96ea-c6dc9fc84203 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:09:38.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9427" for this suite. Aug 26 23:09:44.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:09:44.965: INFO: namespace projected-9427 deletion completed in 6.209896785s • [SLOW TEST:12.634 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:09:44.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-rpcz STEP: Creating a pod to test atomic-volume-subpath Aug 26 23:09:45.164: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rpcz" in namespace "subpath-7311" to be "success or failure" Aug 26 23:09:45.174: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.408857ms Aug 26 23:09:47.282: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117119903s Aug 26 23:09:49.289: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 4.124312816s Aug 26 23:09:51.295: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 6.129944466s Aug 26 23:09:53.301: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 8.136876139s Aug 26 23:09:55.308: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 10.143825433s Aug 26 23:09:57.314: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.149582967s Aug 26 23:09:59.320: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 14.155615905s Aug 26 23:10:01.327: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 16.16252364s Aug 26 23:10:03.335: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 18.170145685s Aug 26 23:10:05.341: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 20.176125889s Aug 26 23:10:07.349: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 22.184205209s Aug 26 23:10:09.356: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Running", Reason="", readiness=true. Elapsed: 24.191341087s Aug 26 23:10:11.365: INFO: Pod "pod-subpath-test-projected-rpcz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.20063662s STEP: Saw pod success Aug 26 23:10:11.365: INFO: Pod "pod-subpath-test-projected-rpcz" satisfied condition "success or failure" Aug 26 23:10:11.370: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-rpcz container test-container-subpath-projected-rpcz: STEP: delete the pod Aug 26 23:10:11.406: INFO: Waiting for pod pod-subpath-test-projected-rpcz to disappear Aug 26 23:10:11.417: INFO: Pod pod-subpath-test-projected-rpcz no longer exists STEP: Deleting pod pod-subpath-test-projected-rpcz Aug 26 23:10:11.418: INFO: Deleting pod "pod-subpath-test-projected-rpcz" in namespace "subpath-7311" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:10:11.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7311" for this suite. 
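The "Atomic writer volumes" subpath test that just completed exercises how the kubelet publishes projected/ConfigMap/Secret volume content: files are written into a fresh hidden directory and a `..data` symlink is swapped so readers never see a half-written update. A minimal sketch of that symlink-swap pattern (directory and file names here are illustrative, not the kubelet's actual code):

```python
import os
import tempfile

def atomic_update(volume_dir: str, payload: dict) -> None:
    """Publish payload atomically, mimicking kubelet's atomic-writer
    pattern: write files into a fresh hidden dir, then swap a '..data'
    symlink so readers never observe a partially written update."""
    new_dir = tempfile.mkdtemp(prefix="..tmp_", dir=volume_dir)
    for name, content in payload.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(content)
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    data_link = os.path.join(volume_dir, "..data")
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.rename(tmp_link, data_link)  # rename over a symlink is atomic on POSIX
    # User-visible names are stable symlinks routed through ..data.
    for name in payload:
        user_path = os.path.join(volume_dir, name)
        if not os.path.islink(user_path):
            os.symlink(os.path.join("..data", name), user_path)
```

With this layout, a second `atomic_update` call flips every file to the new content in one `rename`, which is why the test's container can keep reading a consistent file through the subpath while the volume is updated.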
Aug 26 23:10:17.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:10:17.578: INFO: namespace subpath-7311 deletion completed in 6.149207199s • [SLOW TEST:32.612 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:10:17.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-1a0da6fe-0eed-41f0-83d2-74f89f6f0638 [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:10:17.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4474" for this suite. Aug 26 23:10:23.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:10:23.897: INFO: namespace secrets-4474 deletion completed in 6.192253163s • [SLOW TEST:6.316 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:10:23.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:10:24.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8005" for this suite. Aug 26 23:10:30.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:10:30.279: INFO: namespace services-8005 deletion completed in 6.213933161s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.381 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:10:30.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:10:34.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2757" for this suite. Aug 26 23:11:24.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:11:24.602: INFO: namespace kubelet-test-2757 deletion completed in 50.200650754s • [SLOW TEST:54.323 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:11:24.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 26 23:11:24.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9345' Aug 26 23:11:31.735: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 26 23:11:31.735: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Aug 26 23:11:31.764: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-jt2ct] Aug 26 23:11:31.765: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-jt2ct" in namespace "kubectl-9345" to be "running and ready" Aug 26 23:11:31.801: INFO: Pod "e2e-test-nginx-rc-jt2ct": Phase="Pending", Reason="", readiness=false. Elapsed: 36.138659ms Aug 26 23:11:33.807: INFO: Pod "e2e-test-nginx-rc-jt2ct": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042622459s Aug 26 23:11:35.814: INFO: Pod "e2e-test-nginx-rc-jt2ct": Phase="Running", Reason="", readiness=true. Elapsed: 4.049632072s Aug 26 23:11:35.815: INFO: Pod "e2e-test-nginx-rc-jt2ct" satisfied condition "running and ready" Aug 26 23:11:35.815: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-jt2ct] Aug 26 23:11:35.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9345' Aug 26 23:11:37.147: INFO: stderr: "" Aug 26 23:11:37.147: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Aug 26 23:11:37.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9345' Aug 26 23:11:38.404: INFO: stderr: "" Aug 26 23:11:38.405: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:11:38.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9345" for this suite. 
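The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `"running and ready"` records throughout this run come from the e2e framework polling pod phase on a fixed interval until a condition holds or the timeout expires. A generic sketch of that wait loop (function name and injectable `clock`/`sleep` parameters are illustrative, not the framework's actual API):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses,
    mirroring the framework's 'Waiting up to 5m0s ...' loops.
    clock and sleep are injectable so the loop is testable."""
    start = clock()
    while True:
        if check():
            return True
        if clock() - start >= timeout_s:
            return False
        sleep(interval_s)
```

Each `Elapsed:` line in the log corresponds to one iteration of such a loop, roughly two seconds apart.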
Aug 26 23:11:44.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:11:44.571: INFO: namespace kubectl-9345 deletion completed in 6.156846266s • [SLOW TEST:19.969 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:11:44.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-84ccbbc3-2d17-4383-9ce5-bfce54adb37e STEP: Creating secret with name s-test-opt-upd-bf31bebf-a5bf-4f94-afc0-89e27971923c STEP: Creating the pod STEP: Deleting secret 
s-test-opt-del-84ccbbc3-2d17-4383-9ce5-bfce54adb37e STEP: Updating secret s-test-opt-upd-bf31bebf-a5bf-4f94-afc0-89e27971923c STEP: Creating secret with name s-test-opt-create-5055c8a7-f9cc-4c67-a955-869a577bedba STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:13:12.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9959" for this suite. Aug 26 23:13:36.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:13:36.833: INFO: namespace secrets-9959 deletion completed in 24.151358601s • [SLOW TEST:112.260 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:13:36.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace 
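Both the Secrets suite above ("should fail to create secret due to empty secret key") and the ConfigMap suite later in this run assert that the API server rejects objects whose data map contains an empty key. Per the documented key format, data keys must be non-empty, at most 253 characters, and consist of alphanumerics plus `-`, `_`, and `.`. A hedged re-implementation of that check (a sketch based on the documented rule, not the actual `k8s.io/apimachinery` validation code):

```python
import re

# Documented key format for ConfigMap/Secret data keys:
# non-empty, <= 253 chars, alphanumerics plus '-', '_', '.'.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_data_key(key: str) -> bool:
    """Approximate the apiserver's data-key validation (illustrative
    sketch; the real implementation lives in k8s.io/apimachinery)."""
    return bool(key) and len(key) <= 253 and _KEY_RE.match(key) is not None
```

Under this rule an empty key fails immediately, which is why those two negative tests complete in seconds: the create call is rejected server-side and no pod is ever scheduled.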
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 26 23:13:37.076: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:13:45.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-564" for this suite. Aug 26 23:13:52.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:13:52.213: INFO: namespace init-container-564 deletion completed in 6.359535297s • [SLOW TEST:15.379 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:13:52.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 26 23:13:52.311: INFO: Waiting up to 5m0s for pod "pod-a4666882-658f-40c0-afa5-fd1ee217309b" in namespace "emptydir-4376" to be "success or failure" Aug 26 23:13:52.317: INFO: Pod "pod-a4666882-658f-40c0-afa5-fd1ee217309b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.525926ms Aug 26 23:13:54.324: INFO: Pod "pod-a4666882-658f-40c0-afa5-fd1ee217309b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012568505s Aug 26 23:13:56.330: INFO: Pod "pod-a4666882-658f-40c0-afa5-fd1ee217309b": Phase="Running", Reason="", readiness=true. Elapsed: 4.018592074s Aug 26 23:13:58.335: INFO: Pod "pod-a4666882-658f-40c0-afa5-fd1ee217309b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024244822s STEP: Saw pod success Aug 26 23:13:58.336: INFO: Pod "pod-a4666882-658f-40c0-afa5-fd1ee217309b" satisfied condition "success or failure" Aug 26 23:13:58.340: INFO: Trying to get logs from node iruya-worker2 pod pod-a4666882-658f-40c0-afa5-fd1ee217309b container test-container: STEP: delete the pod Aug 26 23:13:58.445: INFO: Waiting for pod pod-a4666882-658f-40c0-afa5-fd1ee217309b to disappear Aug 26 23:13:58.471: INFO: Pod pod-a4666882-658f-40c0-afa5-fd1ee217309b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:13:58.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4376" for this suite. Aug 26 23:14:04.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:14:04.644: INFO: namespace emptydir-4376 deletion completed in 6.164528926s • [SLOW TEST:12.427 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Aug 26 23:14:04.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-e9c44d18-26fb-4751-b73c-280b1de3bf95 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:14:04.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8377" for this suite. Aug 26 23:14:10.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:14:10.916: INFO: namespace configmap-8377 deletion completed in 6.162379747s • [SLOW TEST:6.271 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:14:10.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3588 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 26 23:14:11.030: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 26 23:14:37.228: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.50 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3588 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 23:14:37.228: INFO: >>> kubeConfig: /root/.kube/config I0826 23:14:37.285868 7 log.go:172] (0x4001ceefd0) (0x400351ce60) Create stream I0826 23:14:37.286086 7 log.go:172] (0x4001ceefd0) (0x400351ce60) Stream added, broadcasting: 1 I0826 23:14:37.293129 7 log.go:172] (0x4001ceefd0) Reply frame received for 1 I0826 23:14:37.293503 7 log.go:172] (0x4001ceefd0) (0x400349c140) Create stream I0826 23:14:37.293658 7 log.go:172] (0x4001ceefd0) (0x400349c140) Stream added, broadcasting: 3 I0826 23:14:37.297584 7 log.go:172] (0x4001ceefd0) Reply frame received for 3 I0826 23:14:37.297840 7 log.go:172] (0x4001ceefd0) (0x400349c1e0) Create stream I0826 23:14:37.297967 7 log.go:172] (0x4001ceefd0) (0x400349c1e0) Stream added, broadcasting: 5 I0826 23:14:37.302300 7 log.go:172] (0x4001ceefd0) Reply frame 
received for 5 I0826 23:14:38.392967 7 log.go:172] (0x4001ceefd0) Data frame received for 5 I0826 23:14:38.393148 7 log.go:172] (0x400349c1e0) (5) Data frame handling I0826 23:14:38.393374 7 log.go:172] (0x4001ceefd0) Data frame received for 3 I0826 23:14:38.393470 7 log.go:172] (0x400349c140) (3) Data frame handling I0826 23:14:38.393589 7 log.go:172] (0x400349c140) (3) Data frame sent I0826 23:14:38.393684 7 log.go:172] (0x4001ceefd0) Data frame received for 3 I0826 23:14:38.393834 7 log.go:172] (0x400349c140) (3) Data frame handling I0826 23:14:38.395160 7 log.go:172] (0x4001ceefd0) Data frame received for 1 I0826 23:14:38.395317 7 log.go:172] (0x400351ce60) (1) Data frame handling I0826 23:14:38.395459 7 log.go:172] (0x400351ce60) (1) Data frame sent I0826 23:14:38.395592 7 log.go:172] (0x4001ceefd0) (0x400351ce60) Stream removed, broadcasting: 1 I0826 23:14:38.395787 7 log.go:172] (0x4001ceefd0) Go away received I0826 23:14:38.396183 7 log.go:172] (0x4001ceefd0) (0x400351ce60) Stream removed, broadcasting: 1 I0826 23:14:38.396334 7 log.go:172] (0x4001ceefd0) (0x400349c140) Stream removed, broadcasting: 3 I0826 23:14:38.396455 7 log.go:172] (0x4001ceefd0) (0x400349c1e0) Stream removed, broadcasting: 5 Aug 26 23:14:38.397: INFO: Found all expected endpoints: [netserver-0] Aug 26 23:14:38.404: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.95 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3588 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 26 23:14:38.404: INFO: >>> kubeConfig: /root/.kube/config I0826 23:14:38.467198 7 log.go:172] (0x4001cef970) (0x400351d040) Create stream I0826 23:14:38.467358 7 log.go:172] (0x4001cef970) (0x400351d040) Stream added, broadcasting: 1 I0826 23:14:38.471641 7 log.go:172] (0x4001cef970) Reply frame received for 1 I0826 23:14:38.471874 7 log.go:172] (0x4001cef970) (0x4003309540) Create stream I0826 23:14:38.471980 
7 log.go:172] (0x4001cef970) (0x4003309540) Stream added, broadcasting: 3 I0826 23:14:38.473662 7 log.go:172] (0x4001cef970) Reply frame received for 3 I0826 23:14:38.473806 7 log.go:172] (0x4001cef970) (0x400351d0e0) Create stream I0826 23:14:38.473883 7 log.go:172] (0x4001cef970) (0x400351d0e0) Stream added, broadcasting: 5 I0826 23:14:38.475414 7 log.go:172] (0x4001cef970) Reply frame received for 5 I0826 23:14:39.574284 7 log.go:172] (0x4001cef970) Data frame received for 5 I0826 23:14:39.574484 7 log.go:172] (0x400351d0e0) (5) Data frame handling I0826 23:14:39.574622 7 log.go:172] (0x4001cef970) Data frame received for 3 I0826 23:14:39.574776 7 log.go:172] (0x4003309540) (3) Data frame handling I0826 23:14:39.574894 7 log.go:172] (0x4003309540) (3) Data frame sent I0826 23:14:39.574981 7 log.go:172] (0x4001cef970) Data frame received for 3 I0826 23:14:39.575078 7 log.go:172] (0x4003309540) (3) Data frame handling I0826 23:14:39.575829 7 log.go:172] (0x4001cef970) Data frame received for 1 I0826 23:14:39.575932 7 log.go:172] (0x400351d040) (1) Data frame handling I0826 23:14:39.576034 7 log.go:172] (0x400351d040) (1) Data frame sent I0826 23:14:39.576135 7 log.go:172] (0x4001cef970) (0x400351d040) Stream removed, broadcasting: 1 I0826 23:14:39.576263 7 log.go:172] (0x4001cef970) Go away received I0826 23:14:39.576551 7 log.go:172] (0x4001cef970) (0x400351d040) Stream removed, broadcasting: 1 I0826 23:14:39.576651 7 log.go:172] (0x4001cef970) (0x4003309540) Stream removed, broadcasting: 3 I0826 23:14:39.576876 7 log.go:172] (0x4001cef970) (0x400351d0e0) Stream removed, broadcasting: 5 Aug 26 23:14:39.576: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:14:39.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pod-network-test-3588" for this suite. Aug 26 23:15:05.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:15:05.796: INFO: namespace pod-network-test-3588 deletion completed in 26.163544804s • [SLOW TEST:54.879 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:15:05.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: 
creating an rc Aug 26 23:15:05.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1941' Aug 26 23:15:07.622: INFO: stderr: "" Aug 26 23:15:07.622: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Aug 26 23:15:08.631: INFO: Selector matched 1 pods for map[app:redis] Aug 26 23:15:08.632: INFO: Found 0 / 1 Aug 26 23:15:09.631: INFO: Selector matched 1 pods for map[app:redis] Aug 26 23:15:09.631: INFO: Found 0 / 1 Aug 26 23:15:10.630: INFO: Selector matched 1 pods for map[app:redis] Aug 26 23:15:10.631: INFO: Found 0 / 1 Aug 26 23:15:11.631: INFO: Selector matched 1 pods for map[app:redis] Aug 26 23:15:11.632: INFO: Found 1 / 1 Aug 26 23:15:11.633: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 26 23:15:11.638: INFO: Selector matched 1 pods for map[app:redis] Aug 26 23:15:11.638: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Aug 26 23:15:11.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxn8m redis-master --namespace=kubectl-1941' Aug 26 23:15:12.943: INFO: stderr: "" Aug 26 23:15:12.943: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Aug 23:15:10.466 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Aug 23:15:10.467 # Server started, Redis version 3.2.12\n1:M 26 Aug 23:15:10.467 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Aug 23:15:10.467 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Aug 26 23:15:12.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxn8m redis-master --namespace=kubectl-1941 --tail=1' Aug 26 23:15:14.222: INFO: stderr: "" Aug 26 23:15:14.222: INFO: stdout: "1:M 26 Aug 23:15:10.467 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Aug 26 23:15:14.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxn8m redis-master --namespace=kubectl-1941 --limit-bytes=1' Aug 26 23:15:15.546: INFO: stderr: "" Aug 26 23:15:15.546: INFO: stdout: " " STEP: exposing timestamps Aug 26 23:15:15.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxn8m redis-master --namespace=kubectl-1941 --tail=1 --timestamps' Aug 26 23:15:16.893: INFO: stderr: "" Aug 26 23:15:16.893: 
INFO: stdout: "2020-08-26T23:15:10.467244158Z 1:M 26 Aug 23:15:10.467 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Aug 26 23:15:19.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxn8m redis-master --namespace=kubectl-1941 --since=1s' Aug 26 23:15:20.760: INFO: stderr: "" Aug 26 23:15:20.760: INFO: stdout: "" Aug 26 23:15:20.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zxn8m redis-master --namespace=kubectl-1941 --since=24h' Aug 26 23:15:22.096: INFO: stderr: "" Aug 26 23:15:22.096: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Aug 23:15:10.466 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Aug 23:15:10.467 # Server started, Redis version 3.2.12\n1:M 26 Aug 23:15:10.467 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 26 Aug 23:15:10.467 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Aug 26 23:15:22.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1941' Aug 26 23:15:23.337: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 26 23:15:23.337: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Aug 26 23:15:23.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1941' Aug 26 23:15:24.668: INFO: stderr: "No resources found.\n" Aug 26 23:15:24.668: INFO: stdout: "" Aug 26 23:15:24.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1941 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 26 23:15:25.939: INFO: stderr: "" Aug 26 23:15:25.939: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:15:25.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1941" for this suite. 
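The kubectl invocations in the test above exercise the standard log-filtering flags: `--tail=N` (keep the last N lines), `--limit-bytes=N` (truncate the stream), `--timestamps` (prefix RFC3339 timestamps), and `--since=DURATION` (time window). No cluster is assumed here, so the line- and byte-limiting behavior is sketched locally against a saved copy of the log, with the kubectl equivalents noted in comments:

```shell
# Stand-in for a captured container log (what `kubectl logs redis-master-zxn8m redis-master` returned).
cat > /tmp/redis.log <<'EOF'
1:M 26 Aug 23:15:10.466 # WARNING: The TCP backlog setting of 511 cannot be enforced
1:M 26 Aug 23:15:10.467 # Server started, Redis version 3.2.12
1:M 26 Aug 23:15:10.467 * The server is now ready to accept connections on port 6379
EOF

# --tail=1 keeps only the last line of the log.
tail -n 1 /tmp/redis.log     # kubectl logs POD CONTAINER --tail=1

# --limit-bytes=1 truncates the stream after one byte.
head -c 1 /tmp/redis.log     # kubectl logs POD CONTAINER --limit-bytes=1
```

Note that `--since=1s` returned empty stdout in the run above simply because the pod's last log line was several seconds old by then, while `--since=24h` returned the full log.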
Aug 26 23:15:31.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:15:32.106: INFO: namespace kubectl-1941 deletion completed in 6.158539151s • [SLOW TEST:26.309 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:15:32.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring 
the correct watchers observe the notification Aug 26 23:15:32.275: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040136,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 26 23:15:32.275: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040136,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 26 23:15:42.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040156,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 26 23:15:42.291: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040156,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 26 23:15:52.304: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040176,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 26 23:15:52.306: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040176,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 26 23:16:02.315: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040197,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 26 23:16:02.316: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-a,UID:c6047e80-77a2-41ec-9f2c-f786e5298856,ResourceVersion:3040197,Generation:0,CreationTimestamp:2020-08-26 23:15:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 26 23:16:12.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-b,UID:86ed7978-0f3b-437d-a52f-bacda7a549ff,ResourceVersion:3040217,Generation:0,CreationTimestamp:2020-08-26 23:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 26 23:16:12.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-b,UID:86ed7978-0f3b-437d-a52f-bacda7a549ff,ResourceVersion:3040217,Generation:0,CreationTimestamp:2020-08-26 23:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 26 23:16:22.337: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-b,UID:86ed7978-0f3b-437d-a52f-bacda7a549ff,ResourceVersion:3040238,Generation:0,CreationTimestamp:2020-08-26 23:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 26 23:16:22.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-996,SelfLink:/api/v1/namespaces/watch-996/configmaps/e2e-watch-test-configmap-b,UID:86ed7978-0f3b-437d-a52f-bacda7a549ff,ResourceVersion:3040238,Generation:0,CreationTimestamp:2020-08-26 23:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:16:32.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-996" for this suite. 
Aug 26 23:16:38.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:16:38.536: INFO: namespace watch-996 deletion completed in 6.187801587s • [SLOW TEST:66.428 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:16:38.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 26 23:16:38.648: INFO: Waiting up to 5m0s for pod "pod-055d7bb2-135e-4564-aff4-3bf6453dae07" in namespace "emptydir-5723" to be "success or failure" Aug 26 23:16:38.679: INFO: Pod "pod-055d7bb2-135e-4564-aff4-3bf6453dae07": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.638309ms Aug 26 23:16:40.809: INFO: Pod "pod-055d7bb2-135e-4564-aff4-3bf6453dae07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161389746s Aug 26 23:16:42.816: INFO: Pod "pod-055d7bb2-135e-4564-aff4-3bf6453dae07": Phase="Running", Reason="", readiness=true. Elapsed: 4.168039928s Aug 26 23:16:44.823: INFO: Pod "pod-055d7bb2-135e-4564-aff4-3bf6453dae07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174693508s STEP: Saw pod success Aug 26 23:16:44.823: INFO: Pod "pod-055d7bb2-135e-4564-aff4-3bf6453dae07" satisfied condition "success or failure" Aug 26 23:16:44.828: INFO: Trying to get logs from node iruya-worker2 pod pod-055d7bb2-135e-4564-aff4-3bf6453dae07 container test-container: STEP: delete the pod Aug 26 23:16:44.851: INFO: Waiting for pod pod-055d7bb2-135e-4564-aff4-3bf6453dae07 to disappear Aug 26 23:16:44.855: INFO: Pod pod-055d7bb2-135e-4564-aff4-3bf6453dae07 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:16:44.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5723" for this suite. 
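The emptydir case above creates a pod whose volume uses the default medium and checks a file written with mode 0644. A minimal pod manifest of the same general shape (the name, image, and command are illustrative, not the test's actual spec, which uses the e2e mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox           # stand-in for the e2e mounttest image
    command: ["sh", "-c", "echo hello > /data/out && chmod 0644 /data/out && stat -c '%a' /data/out"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}             # default medium (node-local disk); medium: Memory would use tmpfs
```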
Aug 26 23:16:50.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:16:51.024: INFO: namespace emptydir-5723 deletion completed in 6.161830825s • [SLOW TEST:12.487 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:16:51.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-ee31ec61-80bf-403b-af66-17b349504ae9 STEP: Creating a pod to test consume secrets Aug 26 23:16:51.170: INFO: Waiting up to 5m0s for pod "pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490" in namespace "secrets-2403" to be "success or failure" Aug 26 23:16:51.294: INFO: Pod 
"pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490": Phase="Pending", Reason="", readiness=false. Elapsed: 124.478524ms Aug 26 23:16:53.475: INFO: Pod "pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30472682s Aug 26 23:16:55.482: INFO: Pod "pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490": Phase="Running", Reason="", readiness=true. Elapsed: 4.31193286s Aug 26 23:16:57.509: INFO: Pod "pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.338790574s STEP: Saw pod success Aug 26 23:16:57.509: INFO: Pod "pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490" satisfied condition "success or failure" Aug 26 23:16:57.513: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490 container secret-volume-test: STEP: delete the pod Aug 26 23:16:57.533: INFO: Waiting for pod pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490 to disappear Aug 26 23:16:57.563: INFO: Pod pod-secrets-a0d2b879-e13c-49f1-8ae4-9ee7875ac490 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:16:57.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2403" for this suite. 
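The "mappings" in the secret-volume test refer to the `items` field of a secret volume source, which projects individual secret keys to chosen paths inside the mount instead of exposing every key under its own name. A sketch using the secret name from the run above (the key and path names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # stand-in image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-ee31ec61-80bf-403b-af66-17b349504ae9
      items:
      - key: data-1                # key as stored in the Secret
        path: new-path-data-1      # remapped filename inside the volume
```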
Aug 26 23:17:03.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:17:03.913: INFO: namespace secrets-2403 deletion completed in 6.334695204s • [SLOW TEST:12.882 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:17:03.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0826 23:17:06.026282 7 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 26 23:17:06.026: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:17:06.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4960" for this suite. 
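The garbage-collector test deletes the deployment "when not orphaning", i.e. with a cascading propagation policy, and then polls until the dependent ReplicaSet and pods are gone (the "expected 0 rs, got 1 rs" lines are intermediate polls, not failures). Expressed as an API `DeleteOptions` body (a sketch; the e2e framework issues this through the Go client rather than raw JSON):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Background"
}
```

`Background` lets the garbage collector remove dependents after the owner is gone; `Foreground` would block the owner's deletion until dependents are removed, and `Orphan` would leave the ReplicaSet and pods in place.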
Aug 26 23:17:12.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:17:12.253: INFO: namespace gc-4960 deletion completed in 6.219651435s • [SLOW TEST:8.340 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:17:12.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-a92941a1-9f51-4cdd-b8b3-4d1984eed11f STEP: Creating secret with name secret-projected-all-test-volume-b1fabbe9-d493-45a0-84c2-fef31b77ae59 STEP: Creating a pod to test Check all projections for projected volume plugin Aug 26 
23:17:12.380: INFO: Waiting up to 5m0s for pod "projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398" in namespace "projected-2134" to be "success or failure" Aug 26 23:17:12.393: INFO: Pod "projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398": Phase="Pending", Reason="", readiness=false. Elapsed: 12.947301ms Aug 26 23:17:14.400: INFO: Pod "projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020035682s Aug 26 23:17:16.407: INFO: Pod "projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026451386s Aug 26 23:17:18.414: INFO: Pod "projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033906544s STEP: Saw pod success Aug 26 23:17:18.414: INFO: Pod "projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398" satisfied condition "success or failure" Aug 26 23:17:18.420: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398 container projected-all-volume-test: STEP: delete the pod Aug 26 23:17:18.442: INFO: Waiting for pod projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398 to disappear Aug 26 23:17:18.446: INFO: Pod projected-volume-7897b09b-f040-4c7e-bf4a-14706a20d398 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:17:18.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2134" for this suite. 
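The projected-volume test mounts a single volume that combines multiple projection sources. A sketch of such a volume fragment, referencing the ConfigMap and Secret names created in the run above together with an illustrative downwardAPI entry (the `path` values are assumptions, not the test's actual keys):

```yaml
volumes:
- name: projected-volume
  projected:
    sources:
    - configMap:
        name: configmap-projected-all-test-volume-a92941a1-9f51-4cdd-b8b3-4d1984eed11f
    - secret:
        name: secret-projected-all-test-volume-b1fabbe9-d493-45a0-84c2-fef31b77ae59
    - downwardAPI:
        items:
        - path: podname              # illustrative path
          fieldRef:
            fieldPath: metadata.name
```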
Aug 26 23:17:24.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:17:24.617: INFO: namespace projected-2134 deletion completed in 6.164246485s
• [SLOW TEST:12.362 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:17:24.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 26 23:17:31.497: INFO: 10 pods remaining
Aug 26 23:17:31.498: INFO: 8 pods has nil DeletionTimestamp
Aug 26 23:17:31.498: INFO:
Aug 26 23:17:33.055: INFO: 0 pods remaining
Aug 26 23:17:33.055: INFO: 0 pods has nil DeletionTimestamp
Aug 26 23:17:33.055: INFO:
Aug 26 23:17:34.755: INFO: 0 pods remaining
Aug 26 23:17:34.755: INFO: 0 pods has nil DeletionTimestamp
Aug 26 23:17:34.755: INFO:
STEP: Gathering metrics
W0826 23:17:35.607657 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:17:35.607: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:17:35.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1263" for this suite.
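Editor's note: the garbage-collector test above sets deleteOptions so the RC lingers until its dependents are gone (foreground propagation). A rough sketch of how deletion propagation can be requested, assuming a hypothetical RC named `example-rc` (flag spellings vary across kubectl versions; `--cascade=false` is the 1.15-era form for orphaning):

```shell
# Orphan the pods instead of cascading the delete:
kubectl delete rc example-rc --cascade=false

# Or state the propagation policy explicitly through the API:
kubectl proxy &
curl -X DELETE localhost:8001/api/v1/namespaces/default/replicationcontrollers/example-rc \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```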
Aug 26 23:17:41.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:17:41.913: INFO: namespace gc-1263 deletion completed in 6.298348696s
• [SLOW TEST:17.294 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:17:41.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug 26 23:17:42.584: INFO: created pod pod-service-account-defaultsa
Aug 26 23:17:42.585: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 26 23:17:43.085: INFO: created pod pod-service-account-mountsa
Aug 26 23:17:43.085: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 26 23:17:43.385: INFO: created pod pod-service-account-nomountsa
Aug 26 23:17:43.385: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 26 23:17:43.439: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 26 23:17:43.439: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 26 23:17:43.571: INFO: created pod pod-service-account-mountsa-mountspec
Aug 26 23:17:43.571: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 26 23:17:43.990: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 26 23:17:43.990: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 26 23:17:44.188: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 26 23:17:44.188: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 26 23:17:45.060: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 26 23:17:45.060: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 26 23:17:45.578: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 26 23:17:45.578: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:17:45.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6526" for this suite.
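Editor's note: the ServiceAccounts test above checks every combination of opting out of token automounting. A minimal sketch of the two knobs involved (names and image are illustrative):

```yaml
# Sketch only: token automount can be disabled on the ServiceAccount,
# on the pod spec, or both; when both are set, the pod spec wins.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # ServiceAccount-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level opt-out (takes precedence)
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```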
Aug 26 23:18:14.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:18:14.885: INFO: namespace svcaccounts-6526 deletion completed in 29.254631535s
• [SLOW TEST:32.969 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:18:14.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 26 23:18:15.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1284'
Aug 26 23:18:16.722: INFO: stderr: ""
Aug 26 23:18:16.723: INFO: stdout: "pod/pause created\n"
Aug 26 23:18:16.723: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 26 23:18:16.723: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1284" to be "running and ready"
Aug 26 23:18:16.761: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.385755ms
Aug 26 23:18:18.840: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116854608s
Aug 26 23:18:20.851: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128365589s
Aug 26 23:18:22.858: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.135311399s
Aug 26 23:18:22.859: INFO: Pod "pause" satisfied condition "running and ready"
Aug 26 23:18:22.859: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 26 23:18:22.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1284'
Aug 26 23:18:24.139: INFO: stderr: ""
Aug 26 23:18:24.139: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 26 23:18:24.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1284'
Aug 26 23:18:25.417: INFO: stderr: ""
Aug 26 23:18:25.417: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 26 23:18:25.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1284'
Aug 26 23:18:26.729: INFO: stderr: ""
Aug 26 23:18:26.729: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 26 23:18:26.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1284'
Aug 26 23:18:28.005: INFO: stderr: ""
Aug 26 23:18:28.005: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 26 23:18:28.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1284'
Aug 26 23:18:29.366: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:18:29.366: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 26 23:18:29.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1284'
Aug 26 23:18:30.687: INFO: stderr: "No resources found.\n"
Aug 26 23:18:30.688: INFO: stdout: ""
Aug 26 23:18:30.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1284 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 23:18:31.966: INFO: stderr: ""
Aug 26 23:18:31.967: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:18:31.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1284" for this suite.
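Editor's note: the label round-trip exercised above reduces to three kubectl invocations (taken from the commands logged in the run; the `--kubeconfig` and `--namespace` flags are omitted here, add them as needed):

```shell
kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # show it as a column
kubectl label pods pause testing-label-                      # trailing "-" removes it
```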
Aug 26 23:18:38.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:18:38.154: INFO: namespace kubectl-1284 deletion completed in 6.177988726s
• [SLOW TEST:23.266 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:18:38.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 23:18:48.291: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:18:48.296: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:18:50.296: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:18:50.303: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:18:52.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:18:52.309: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:18:54.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:18:54.304: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:18:56.296: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:18:56.304: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:18:58.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:18:58.304: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:19:00.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:19:00.304: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:19:02.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:19:02.304: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:19:04.296: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:19:04.302: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:19:04.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7481" for this suite.
Aug 26 23:19:26.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:19:26.475: INFO: namespace container-lifecycle-hook-7481 deletion completed in 22.156624427s
• [SLOW TEST:48.320 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:19:26.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-96e208b5-539a-4573-b6a1-c7e3b1a805aa
STEP: Creating a pod to test consume configMaps
Aug 26 23:19:26.657: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb" in namespace "projected-7301" to be "success or failure"
Aug 26 23:19:26.680: INFO: Pod "pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.811804ms
Aug 26 23:19:28.687: INFO: Pod "pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029844194s
Aug 26 23:19:30.694: INFO: Pod "pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036318462s
STEP: Saw pod success
Aug 26 23:19:30.694: INFO: Pod "pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb" satisfied condition "success or failure"
Aug 26 23:19:30.698: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb container projected-configmap-volume-test:
STEP: delete the pod
Aug 26 23:19:30.845: INFO: Waiting for pod pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb to disappear
Aug 26 23:19:30.869: INFO: Pod pod-projected-configmaps-65544a9e-1343-474c-98aa-6307074ffddb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:19:30.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7301" for this suite.
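Editor's note: the prestop-hook test earlier in this run deletes a pod named pod-with-prestop-http-hook and then verifies the handler container received the hook request. A minimal sketch of such a pod spec (image, path, and port are illustrative assumptions):

```yaml
# Sketch only: an HTTP preStop hook fired when the pod is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # hypothetical handler endpoint
          port: 8080
```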
Aug 26 23:19:36.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:19:37.207: INFO: namespace projected-7301 deletion completed in 6.328360866s
• [SLOW TEST:10.730 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:19:37.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-0a518a2b-ba37-49bd-a1eb-b42f2da23e83
STEP: Creating a pod to test consume secrets
Aug 26 23:19:37.448: INFO: Waiting up to 5m0s for pod "pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3" in namespace "secrets-7460" to be "success or failure"
Aug 26 23:19:37.475: INFO: Pod "pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3": Phase="Pending", Reason="", readiness=false. Elapsed: 26.169615ms
Aug 26 23:19:39.482: INFO: Pod "pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033286305s
Aug 26 23:19:41.561: INFO: Pod "pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112319821s
Aug 26 23:19:43.566: INFO: Pod "pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117346668s
STEP: Saw pod success
Aug 26 23:19:43.566: INFO: Pod "pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3" satisfied condition "success or failure"
Aug 26 23:19:43.569: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3 container secret-volume-test:
STEP: delete the pod
Aug 26 23:19:43.596: INFO: Waiting for pod pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3 to disappear
Aug 26 23:19:43.631: INFO: Pod pod-secrets-64e276ba-1748-477c-8432-08a7fcaf86f3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:19:43.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7460" for this suite.
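Editor's note: the "mappings and Item Mode set" test above mounts a secret volume with a key renamed via `items` and a per-item file mode. A minimal sketch (names, image, key, and mode value are illustrative):

```yaml
# Sketch only: a secret key mapped to a new path with an explicit mode (0400).
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret     # hypothetical Secret
      items:
      - key: data-1
        path: new-path-data-1        # key-to-path mapping
        mode: 0400                   # per-item "Item Mode"
```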
Aug 26 23:19:52.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:19:52.746: INFO: namespace secrets-7460 deletion completed in 9.102157853s
• [SLOW TEST:15.537 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:19:52.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:19:53.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3" in namespace "downward-api-7436" to be "success or failure"
Aug 26 23:19:53.413: INFO: Pod "downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3": Phase="Pending", Reason="", readiness=false. Elapsed: 77.642069ms
Aug 26 23:19:55.420: INFO: Pod "downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084481543s
Aug 26 23:19:57.427: INFO: Pod "downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092177132s
Aug 26 23:19:59.434: INFO: Pod "downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098811441s
STEP: Saw pod success
Aug 26 23:19:59.434: INFO: Pod "downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3" satisfied condition "success or failure"
Aug 26 23:19:59.438: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3 container client-container:
STEP: delete the pod
Aug 26 23:19:59.668: INFO: Waiting for pod downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3 to disappear
Aug 26 23:19:59.915: INFO: Pod downwardapi-volume-6926e720-1f15-4b3a-b8b0-7ad0399481c3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:19:59.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7436" for this suite.
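Editor's note: the DefaultMode test above checks permissions on downward-API files. A minimal sketch of the volume definition involved (name, image, and the 0400 value are illustrative):

```yaml
# Sketch only: defaultMode applies to all files in the downward-API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # mode under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```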
Aug 26 23:20:05.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:20:06.105: INFO: namespace downward-api-7436 deletion completed in 6.181830548s
• [SLOW TEST:13.357 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:20:06.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:20:06.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4" in namespace "downward-api-183" to be "success or failure"
Aug 26 23:20:06.285: INFO: Pod "downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.554575ms
Aug 26 23:20:08.493: INFO: Pod "downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222818506s
Aug 26 23:20:10.645: INFO: Pod "downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4": Phase="Running", Reason="", readiness=true. Elapsed: 4.374070093s
Aug 26 23:20:12.908: INFO: Pod "downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.637474532s
STEP: Saw pod success
Aug 26 23:20:12.908: INFO: Pod "downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4" satisfied condition "success or failure"
Aug 26 23:20:13.202: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4 container client-container:
STEP: delete the pod
Aug 26 23:20:13.826: INFO: Waiting for pod downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4 to disappear
Aug 26 23:20:13.834: INFO: Pod downwardapi-volume-3ee33111-b6d5-4432-b1d8-a2d9696fecb4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:20:13.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-183" for this suite.
Aug 26 23:20:20.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:20:21.027: INFO: namespace downward-api-183 deletion completed in 7.185462736s
• [SLOW TEST:14.920 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:20:21.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 26 23:20:21.904: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 23:20:21.972: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 23:20:22.043: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Aug 26 23:20:22.059: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.060: INFO: Container kindnet-cni ready: true, restart count 0
Aug 26 23:20:22.060: INFO: daemon-set-6z8rp from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.060: INFO: Container app ready: true, restart count 0
Aug 26 23:20:22.060: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.060: INFO: Container app ready: true, restart count 0
Aug 26 23:20:22.060: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.060: INFO: Container kube-proxy ready: true, restart count 0
Aug 26 23:20:22.060: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.060: INFO: Container app ready: true, restart count 0
Aug 26 23:20:22.060: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Aug 26 23:20:22.073: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.073: INFO: Container kindnet-cni ready: true, restart count 0
Aug 26 23:20:22.073: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.073: INFO: Container app ready: true, restart count 0
Aug 26 23:20:22.073: INFO: daemon-set-fzgmk from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.073: INFO: Container app ready: true, restart count 0
Aug 26 23:20:22.073: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.073: INFO: Container kube-proxy ready: true, restart count 0
Aug 26 23:20:22.073: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded)
Aug 26 23:20:22.073: INFO: Container app ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug 26 23:20:22.795: INFO: Pod daemon-set-2gkvj requesting resource cpu=0m on Node iruya-worker
Aug 26 23:20:22.795: INFO: Pod daemon-set-hlzh5 requesting resource cpu=0m on Node iruya-worker2
Aug 26 23:20:22.795: INFO: Pod daemon-set-6z8rp requesting resource cpu=0m on Node iruya-worker
Aug 26 23:20:22.795: INFO: Pod daemon-set-fzgmk requesting resource cpu=0m on Node iruya-worker2
Aug 26 23:20:22.796: INFO: Pod daemon-set-nk8hf requesting resource cpu=0m on Node iruya-worker2
Aug 26 23:20:22.796: INFO: Pod daemon-set-qwbvn requesting resource cpu=0m on Node iruya-worker
Aug 26 23:20:22.796: INFO: Pod kindnet-nkf5n requesting resource cpu=100m on Node iruya-worker
Aug 26 23:20:22.796: INFO: Pod kindnet-xsdzz requesting resource cpu=100m on Node iruya-worker2
Aug 26 23:20:22.796: INFO: Pod kube-proxy-5zw8s requesting resource cpu=0m on Node iruya-worker
Aug 26 23:20:22.796: INFO: Pod kube-proxy-b98qt requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1.162ef4bff73d569d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6231/filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1.162ef4c070601c95], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1.162ef4c109adf557], Reason = [Created], Message = [Created container filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1.162ef4c11b94b648], Reason = [Started], Message = [Started container filler-pod-032c0237-afa6-4e80-a139-42cf0b4b30d1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74.162ef4c0024639c6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6231/filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74.162ef4c06d983da2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74.162ef4c109ae592b], Reason = [Created], Message = [Created container filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74.162ef4c123dd3067], Reason = [Started], Message = [Started container filler-pod-4115fe20-ea5c-4714-9ddb-1ffd36d85e74]
STEP: Considering event: Type = [Warning], Name = [additional-pod.162ef4c169c25110], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:20:30.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6231" for this suite.
Aug 26 23:20:39.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:20:39.891: INFO: namespace sched-pred-6231 deletion completed in 9.133264519s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:18.863 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:20:39.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Aug 26 23:20:40.074: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6774" to be "success or failure"
Aug 26 23:20:40.119: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 44.809321ms
Aug 26 23:20:42.393: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318886168s
Aug 26 23:20:44.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326566482s
Aug 26 23:20:46.407: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.332856273s
Aug 26 23:20:48.414: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340084957s
Aug 26 23:20:50.421: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.34747341s
STEP: Saw pod success
Aug 26 23:20:50.422: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 26 23:20:50.610: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Aug 26 23:20:50.688: INFO: Waiting for pod pod-host-path-test to disappear
Aug 26 23:20:50.771: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:20:50.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6774" for this suite.
Aug 26 23:20:56.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:20:57.340: INFO: namespace hostpath-6774 deletion completed in 6.488230493s
• [SLOW TEST:17.447 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:20:57.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:20:57.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe" in namespace "downward-api-2072" to be "success or failure"
Aug 26 23:20:57.628: INFO: Pod "downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 66.841582ms
Aug 26 23:20:59.993: INFO: Pod "downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432170365s
Aug 26 23:21:01.998: INFO: Pod "downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436602462s
Aug 26 23:21:04.004: INFO: Pod "downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.442629785s
STEP: Saw pod success
Aug 26 23:21:04.004: INFO: Pod "downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe" satisfied condition "success or failure"
Aug 26 23:21:04.008: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe container client-container:
STEP: delete the pod
Aug 26 23:21:04.163: INFO: Waiting for pod downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe to disappear
Aug 26 23:21:04.201: INFO: Pod downwardapi-volume-228bec08-f83b-4c5f-8c97-53f2b4349dbe no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:21:04.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2072" for this suite.
Aug 26 23:21:12.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:21:12.557: INFO: namespace downward-api-2072 deletion completed in 8.345996153s
• [SLOW TEST:15.216 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:21:12.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:21:21.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2858" for this suite.
Aug 26 23:21:27.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:21:27.757: INFO: namespace watch-2858 deletion completed in 6.210146125s
• [SLOW TEST:15.198 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:21:27.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:21:27.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10" in namespace "projected-9445" to be "success or failure"
Aug 26 23:21:28.041: INFO: Pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10": Phase="Pending", Reason="", readiness=false. Elapsed: 58.339645ms
Aug 26 23:21:30.048: INFO: Pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064965015s
Aug 26 23:21:32.221: INFO: Pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237655179s
Aug 26 23:21:34.245: INFO: Pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261594239s
Aug 26 23:21:36.254: INFO: Pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.270468132s
STEP: Saw pod success
Aug 26 23:21:36.254: INFO: Pod "downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10" satisfied condition "success or failure"
Aug 26 23:21:36.258: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10 container client-container:
STEP: delete the pod
Aug 26 23:21:36.430: INFO: Waiting for pod downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10 to disappear
Aug 26 23:21:36.621: INFO: Pod downwardapi-volume-c94a9ebf-d11e-4202-8ee7-48bbf4b10a10 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:21:36.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9445" for this suite.
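The projected downward API test above exercises a documented fallback: when a container declares no memory limit, the downward API reports the node's allocatable memory as the default limit. A simplified sketch of that resolution rule (plain `int64` bytes instead of the real resource-quantity types; the function name is illustrative):

```go
package main

import "fmt"

// effectiveMemoryLimit returns the container's memory limit when one is
// set, otherwise falls back to the node's allocatable memory, which is
// the behaviour the test above verifies. Values are bytes; a simplified
// model, not the actual Kubernetes resource API.
func effectiveMemoryLimit(containerLimit, nodeAllocatable int64) int64 {
	if containerLimit > 0 {
		return containerLimit
	}
	return nodeAllocatable
}

func main() {
	const nodeAllocatable = int64(8) << 30 // assume an 8 GiB worker node
	// No limit set: the node allocatable is reported.
	fmt.Println(effectiveMemoryLimit(0, nodeAllocatable))
	// Explicit 512 MiB limit: that limit wins.
	fmt.Println(effectiveMemoryLimit(int64(512)<<20, nodeAllocatable))
}
```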
Aug 26 23:21:42.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:21:42.796: INFO: namespace projected-9445 deletion completed in 6.164254601s
• [SLOW TEST:15.038 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:21:42.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug 26 23:21:50.228: INFO: Pod pod-hostip-8876c1a0-726a-4fdb-af21-aa59a0f37bc7 has hostIP: 172.18.0.5
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:21:50.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7217" for this suite.
Aug 26 23:22:14.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:22:15.533: INFO: namespace pods-7217 deletion completed in 25.297594548s
• [SLOW TEST:32.736 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:22:15.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 26 23:22:16.619: INFO: Waiting up to 5m0s for pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78" in namespace "emptydir-6548" to be "success or failure"
Aug 26 23:22:17.073: INFO: Pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78": Phase="Pending", Reason="", readiness=false. Elapsed: 454.441249ms
Aug 26 23:22:19.080: INFO: Pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461778948s
Aug 26 23:22:21.087: INFO: Pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4683348s
Aug 26 23:22:23.144: INFO: Pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.525006854s
Aug 26 23:22:25.683: INFO: Pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.064464647s
STEP: Saw pod success
Aug 26 23:22:25.683: INFO: Pod "pod-446aacce-5ead-4852-a1ae-2c1d058c1a78" satisfied condition "success or failure"
Aug 26 23:22:25.731: INFO: Trying to get logs from node iruya-worker2 pod pod-446aacce-5ead-4852-a1ae-2c1d058c1a78 container test-container:
STEP: delete the pod
Aug 26 23:22:25.952: INFO: Waiting for pod pod-446aacce-5ead-4852-a1ae-2c1d058c1a78 to disappear
Aug 26 23:22:26.443: INFO: Pod pod-446aacce-5ead-4852-a1ae-2c1d058c1a78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:22:26.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6548" for this suite.
Aug 26 23:22:32.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:22:33.008: INFO: namespace emptydir-6548 deletion completed in 6.555520838s
• [SLOW TEST:17.473 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:22:33.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 26 23:22:34.606: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1732,SelfLink:/api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed,UID:6d1d02d1-1bb0-4371-b3b4-754d9b5c5e95,ResourceVersion:3041765,Generation:0,CreationTimestamp:2020-08-26 23:22:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 23:22:34.608: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1732,SelfLink:/api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed,UID:6d1d02d1-1bb0-4371-b3b4-754d9b5c5e95,ResourceVersion:3041766,Generation:0,CreationTimestamp:2020-08-26 23:22:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 26 23:22:34.608: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1732,SelfLink:/api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed,UID:6d1d02d1-1bb0-4371-b3b4-754d9b5c5e95,ResourceVersion:3041767,Generation:0,CreationTimestamp:2020-08-26 23:22:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 26 23:22:44.720: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1732,SelfLink:/api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed,UID:6d1d02d1-1bb0-4371-b3b4-754d9b5c5e95,ResourceVersion:3041787,Generation:0,CreationTimestamp:2020-08-26 23:22:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 23:22:44.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1732,SelfLink:/api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed,UID:6d1d02d1-1bb0-4371-b3b4-754d9b5c5e95,ResourceVersion:3041789,Generation:0,CreationTimestamp:2020-08-26 23:22:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 26 23:22:44.721: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1732,SelfLink:/api/v1/namespaces/watch-1732/configmaps/e2e-watch-test-label-changed,UID:6d1d02d1-1bb0-4371-b3b4-754d9b5c5e95,ResourceVersion:3041790,Generation:0,CreationTimestamp:2020-08-26 23:22:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:22:44.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1732" for this suite.
Aug 26 23:22:50.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:22:50.907: INFO: namespace watch-1732 deletion completed in 6.174351442s
• [SLOW TEST:17.897 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:22:50.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 26 23:22:51.473: INFO: Create a RollingUpdate DaemonSet
Aug 26 23:22:51.479: INFO: Check that daemon pods launch on every node of the cluster
Aug 26 23:22:51.674: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 26 23:22:51.981: INFO: Number of nodes with available pods: 0
Aug 26 23:22:51.981: INFO: Node iruya-worker is running more than one daemon pod
Aug 26 23:22:53.260: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 26 23:22:53.306: INFO: Number of nodes with available pods: 0
Aug 26 23:22:53.307: INFO: Node iruya-worker is running more than one daemon pod
Aug 26 23:22:53.995: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 26 23:22:54.001: INFO: Number of nodes with available pods: 0
Aug 26 23:22:54.001: INFO: Node iruya-worker is running more than one daemon pod
Aug 26 23:22:54.994: INFO:
DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:22:55.000: INFO: Number of nodes with available pods: 0 Aug 26 23:22:55.000: INFO: Node iruya-worker is running more than one daemon pod Aug 26 23:22:56.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:22:56.449: INFO: Number of nodes with available pods: 0 Aug 26 23:22:56.449: INFO: Node iruya-worker is running more than one daemon pod Aug 26 23:22:56.994: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:22:57.001: INFO: Number of nodes with available pods: 0 Aug 26 23:22:57.001: INFO: Node iruya-worker is running more than one daemon pod Aug 26 23:22:58.004: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:22:58.012: INFO: Number of nodes with available pods: 0 Aug 26 23:22:58.012: INFO: Node iruya-worker is running more than one daemon pod Aug 26 23:22:59.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:22:59.030: INFO: Number of nodes with available pods: 2 Aug 26 23:22:59.030: INFO: Number of running nodes: 2, number of available pods: 2 Aug 26 23:22:59.031: INFO: Update the DaemonSet to trigger a rollout Aug 26 23:22:59.042: INFO: Updating DaemonSet daemon-set Aug 26 23:23:14.337: INFO: Roll back the DaemonSet before rollout is complete Aug 26 23:23:14.346: INFO: Updating DaemonSet daemon-set Aug 26 23:23:14.346: INFO: Make sure DaemonSet rollback is 
complete Aug 26 23:23:14.356: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 26 23:23:14.356: INFO: Pod daemon-set-9lps9 is not available Aug 26 23:23:14.725: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:23:15.859: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 26 23:23:15.859: INFO: Pod daemon-set-9lps9 is not available Aug 26 23:23:16.078: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:23:16.750: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 26 23:23:16.750: INFO: Pod daemon-set-9lps9 is not available Aug 26 23:23:16.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:23:17.733: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 26 23:23:17.733: INFO: Pod daemon-set-9lps9 is not available Aug 26 23:23:17.744: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 26 23:23:18.734: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Aug 26 23:23:18.734: INFO: Pod daemon-set-9lps9 is not available
Aug 26 23:23:18.742: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:23:19.876: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 26 23:23:19.876: INFO: Pod daemon-set-9lps9 is not available
Aug 26 23:23:19.966: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:23:20.732: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 26 23:23:20.732: INFO: Pod daemon-set-9lps9 is not available
Aug 26 23:23:20.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:23:21.733: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 26 23:23:21.733: INFO: Pod daemon-set-9lps9 is not available
Aug 26 23:23:21.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:23:22.733: INFO: Wrong image for pod: daemon-set-9lps9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 26 23:23:22.733: INFO: Pod daemon-set-9lps9 is not available
Aug 26 23:23:22.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:23:23.732: INFO: Pod daemon-set-sg9xl is not available
Aug 26 23:23:23.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7021, will wait for the garbage collector to delete the pods
Aug 26 23:23:23.815: INFO: Deleting DaemonSet.extensions daemon-set took: 7.146216ms
Aug 26 23:23:24.116: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.782031ms
Aug 26 23:23:28.323: INFO: Number of nodes with available pods: 0
Aug 26 23:23:28.323: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 23:23:28.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7021/daemonsets","resourceVersion":"3041949"},"items":null}
Aug 26 23:23:28.337: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7021/pods","resourceVersion":"3041949"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:23:28.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7021" for this suite.
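[Editor's note] The rollback scenario logged above (create a RollingUpdate DaemonSet, push a bad image, roll back before the rollout completes) can be reproduced with a manifest along these lines. This is a minimal sketch: only the `docker.io/library/nginx:1.14-alpine` image, the `daemon-set` name, and the RollingUpdate strategy come from the log; the labels and container name are illustrative.

```yaml
# Hypothetical reproduction of the e2e scenario; labels are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative label, not from the log
  updateStrategy:
    type: RollingUpdate          # the test exercises a RollingUpdate DaemonSet
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                # illustrative container name
        image: docker.io/library/nginx:1.14-alpine   # the "expected" image in the log
```

The failing rollout is then triggered by patching the image to something unpullable (the log shows `foo:non-existent`) and undone with `kubectl rollout undo daemonset/daemon-set`. "Without unnecessary restarts" is the key assertion: pods that never became ready on the bad image are replaced, while healthy pods still running the old image are left alone.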
Aug 26 23:23:34.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:23:34.529: INFO: namespace daemonsets-7021 deletion completed in 6.172136196s • [SLOW TEST:43.622 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:23:34.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Aug 26 23:23:34.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6756 run e2e-test-rm-busybox-job 
--image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 26 23:23:49.109: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0826 23:23:48.954070 1511 log.go:172] (0x400088e0b0) (0x40008c85a0) Create stream\nI0826 23:23:48.957708 1511 log.go:172] (0x400088e0b0) (0x40008c85a0) Stream added, broadcasting: 1\nI0826 23:23:48.968418 1511 log.go:172] (0x400088e0b0) Reply frame received for 1\nI0826 23:23:48.969946 1511 log.go:172] (0x400088e0b0) (0x40008c8640) Create stream\nI0826 23:23:48.970101 1511 log.go:172] (0x400088e0b0) (0x40008c8640) Stream added, broadcasting: 3\nI0826 23:23:48.972472 1511 log.go:172] (0x400088e0b0) Reply frame received for 3\nI0826 23:23:48.973123 1511 log.go:172] (0x400088e0b0) (0x40005a0140) Create stream\nI0826 23:23:48.973245 1511 log.go:172] (0x400088e0b0) (0x40005a0140) Stream added, broadcasting: 5\nI0826 23:23:48.975234 1511 log.go:172] (0x400088e0b0) Reply frame received for 5\nI0826 23:23:48.975746 1511 log.go:172] (0x400088e0b0) (0x40008c86e0) Create stream\nI0826 23:23:48.975850 1511 log.go:172] (0x400088e0b0) (0x40008c86e0) Stream added, broadcasting: 7\nI0826 23:23:48.977366 1511 log.go:172] (0x400088e0b0) Reply frame received for 7\nI0826 23:23:48.980503 1511 log.go:172] (0x40008c8640) (3) Writing data frame\nI0826 23:23:48.981563 1511 log.go:172] (0x40008c8640) (3) Writing data frame\nI0826 23:23:48.982811 1511 log.go:172] (0x400088e0b0) Data frame received for 5\nI0826 23:23:48.983016 1511 log.go:172] (0x40005a0140) (5) Data frame handling\nI0826 23:23:48.983422 1511 log.go:172] (0x40005a0140) (5) Data frame sent\nI0826 23:23:48.983839 1511 log.go:172] (0x400088e0b0) Data frame received for 5\nI0826 23:23:48.983918 1511 log.go:172] (0x40005a0140) (5) Data frame 
handling\nI0826 23:23:48.984036 1511 log.go:172] (0x40005a0140) (5) Data frame sent\nI0826 23:23:49.031588 1511 log.go:172] (0x400088e0b0) Data frame received for 7\nI0826 23:23:49.031798 1511 log.go:172] (0x40008c86e0) (7) Data frame handling\nI0826 23:23:49.031948 1511 log.go:172] (0x400088e0b0) Data frame received for 5\nI0826 23:23:49.032098 1511 log.go:172] (0x40005a0140) (5) Data frame handling\nI0826 23:23:49.032514 1511 log.go:172] (0x400088e0b0) Data frame received for 1\nI0826 23:23:49.032714 1511 log.go:172] (0x40008c85a0) (1) Data frame handling\nI0826 23:23:49.033017 1511 log.go:172] (0x40008c85a0) (1) Data frame sent\nI0826 23:23:49.034052 1511 log.go:172] (0x400088e0b0) (0x40008c85a0) Stream removed, broadcasting: 1\nI0826 23:23:49.034533 1511 log.go:172] (0x400088e0b0) (0x40008c8640) Stream removed, broadcasting: 3\nI0826 23:23:49.037316 1511 log.go:172] (0x400088e0b0) (0x40008c85a0) Stream removed, broadcasting: 1\nI0826 23:23:49.038890 1511 log.go:172] (0x400088e0b0) Go away received\nI0826 23:23:49.039143 1511 log.go:172] (0x400088e0b0) (0x40008c8640) Stream removed, broadcasting: 3\nI0826 23:23:49.039308 1511 log.go:172] (0x400088e0b0) (0x40005a0140) Stream removed, broadcasting: 5\nI0826 23:23:49.040369 1511 log.go:172] (0x400088e0b0) (0x40008c86e0) Stream removed, broadcasting: 7\n" Aug 26 23:23:49.111: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:23:51.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6756" for this suite. 
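[Editor's note] The stderr above warns that `kubectl run --generator=job/v1` is deprecated. The invocation in the log maps roughly onto the following Job object; this is a sketch, with field values mirrored from the command-line flags shown in the log (the container name is an assumption).

```yaml
# Rough Job equivalent of the deprecated `kubectl run --generator=job/v1`
# command from the log; comments note the originating flags.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job                  # positional NAME argument
spec:
  template:
    spec:
      restartPolicy: OnFailure                   # --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job            # assumed; kubectl reuses the run name
        image: docker.io/library/busybox:1.29    # --image=...
        stdin: true                              # --stdin
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

The `--attach=true --rm=true` behavior has no manifest equivalent: kubectl attaches to the pod's stdin/stdout and deletes the Job after the stream closes, which is exactly the `job.batch "e2e-test-rm-busybox-job" deleted` line the test verifies.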
Aug 26 23:23:57.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:23:57.478: INFO: namespace kubectl-6756 deletion completed in 6.3467895s • [SLOW TEST:22.948 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:23:57.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] 
ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:24:02.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2636" for this suite. Aug 26 23:24:26.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:24:26.802: INFO: namespace replication-controller-2636 deletion completed in 24.16077844s • [SLOW TEST:29.320 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:24:26.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the 
container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 26 23:24:37.015: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 26 23:24:37.066: INFO: Pod pod-with-poststart-http-hook still exists Aug 26 23:24:39.067: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 26 23:24:39.074: INFO: Pod pod-with-poststart-http-hook still exists Aug 26 23:24:41.067: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 26 23:24:41.073: INFO: Pod pod-with-poststart-http-hook still exists Aug 26 23:24:43.067: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 26 23:24:43.074: INFO: Pod pod-with-poststart-http-hook still exists Aug 26 23:24:45.067: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 26 23:24:45.073: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:24:45.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9684" for this suite. 
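[Editor's note] The postStart HTTP hook exercised above can be sketched as a pod manifest like the following. Only the pod name comes from the log; the image, path, port, and handler address are illustrative assumptions (the e2e test first creates a separate handler pod and points the hook at it).

```yaml
# Sketch of a pod with a postStart HTTP hook; all values below except the
# pod name are illustrative, not taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name taken from the log
spec:
  containers:
  - name: main                         # illustrative
    image: docker.io/library/nginx:1.14-alpine   # illustrative
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative path
          port: 8080                   # illustrative port
          host: 10.244.0.10            # illustrative handler-pod IP
```

If the postStart request fails, the kubelet kills the container and it is restarted according to the pod's restart policy, which is why the test can assert hook delivery simply by checking the handler received the request and the pod stayed up.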
Aug 26 23:25:09.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:25:09.232: INFO: namespace container-lifecycle-hook-9684 deletion completed in 24.14851506s • [SLOW TEST:42.429 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:25:09.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9151, will wait for the garbage collector to delete the pods Aug 26 23:25:15.589: INFO: Deleting Job.batch foo took: 9.002287ms Aug 26 
23:25:15.890: INFO: Terminating Job.batch foo pods took: 301.04863ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:25:53.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9151" for this suite. Aug 26 23:26:01.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:26:01.955: INFO: namespace job-9151 deletion completed in 8.16643055s • [SLOW TEST:52.722 seconds] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:26:01.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-28b3b122-90b1-462e-a43b-689ce2b61bdb STEP: Creating secret with name s-test-opt-upd-1b9edb63-7760-4edb-a4b5-7a75d7dd2931 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-28b3b122-90b1-462e-a43b-689ce2b61bdb STEP: Updating secret s-test-opt-upd-1b9edb63-7760-4edb-a4b5-7a75d7dd2931 STEP: Creating secret with name s-test-opt-create-2139ea95-0036-4858-85b2-a982574fbbcd STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:27:36.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4973" for this suite. Aug 26 23:28:00.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:28:00.256: INFO: namespace projected-4973 deletion completed in 24.198710341s • [SLOW TEST:118.298 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:28:00.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 23:28:00.351: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:28:04.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3227" for this suite. 
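[Editor's note] The "remote command execution over websockets" test drives the pod `exec` subresource directly rather than shelling out to `kubectl exec`. The request shape is roughly the following sketch; the namespace `pods-3227` comes from the log, while the pod name and command are illustrative. The `v4.channel.k8s.io` subprotocol multiplexes stdin/stdout/stderr as numbered channels prefixed on each websocket frame.

```
GET /api/v1/namespaces/pods-3227/pods/<pod-name>/exec
    ?command=cat&command=/etc/resolv.conf&stdout=1&stderr=1
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Protocol: v4.channel.k8s.io
```

Each `command=` query parameter is one argv element, so arguments never need shell quoting; the test passes when the expected output arrives on the stdout channel of the upgraded connection.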
Aug 26 23:28:42.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:28:42.787: INFO: namespace pods-3227 deletion completed in 38.269810059s • [SLOW TEST:42.530 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:28:42.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 23:28:42.898: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 26 23:28:47.906: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 26 23:28:47.907: INFO: Waiting for pods owned by replica 
set "test-rollover-controller" to become ready Aug 26 23:28:49.915: INFO: Creating deployment "test-rollover-deployment" Aug 26 23:28:49.983: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 26 23:28:51.997: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 26 23:28:52.007: INFO: Ensure that both replica sets have 1 created replica Aug 26 23:28:52.015: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 26 23:28:52.042: INFO: Updating deployment test-rollover-deployment Aug 26 23:28:52.042: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 26 23:28:54.057: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 26 23:28:54.069: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 26 23:28:54.080: INFO: all replica sets need to contain the pod-template-hash label Aug 26 23:28:54.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081332, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:28:56.110: INFO: all replica sets need to 
contain the pod-template-hash label Aug 26 23:28:56.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081332, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:28:58.097: INFO: all replica sets need to contain the pod-template-hash label Aug 26 23:28:58.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081336, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:29:00.107: INFO: all replica sets need to contain the pod-template-hash label Aug 26 23:29:00.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081336, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:29:02.095: INFO: all replica sets need to contain the pod-template-hash label Aug 26 23:29:02.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081336, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, 
loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:29:04.096: INFO: all replica sets need to contain the pod-template-hash label Aug 26 23:29:04.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081336, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:29:06.095: INFO: all replica sets need to contain the pod-template-hash label Aug 26 23:29:06.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081330, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081336, 
loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081329, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 23:29:08.098: INFO: Aug 26 23:29:08.099: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 26 23:29:08.114: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7502,SelfLink:/apis/apps/v1/namespaces/deployment-7502/deployments/test-rollover-deployment,UID:8f676058-788f-4ad1-a948-4acfb8c1b877,ResourceVersion:3042930,Generation:2,CreationTimestamp:2020-08-26 23:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-26 23:28:50 +0000 UTC 2020-08-26 23:28:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-26 23:29:06 +0000 UTC 2020-08-26 23:28:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 26 23:29:08.122: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7502,SelfLink:/apis/apps/v1/namespaces/deployment-7502/replicasets/test-rollover-deployment-854595fc44,UID:30abcbe5-7176-49eb-8b1b-34d035854d9c,ResourceVersion:3042919,Generation:2,CreationTimestamp:2020-08-26 23:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f676058-788f-4ad1-a948-4acfb8c1b877 0x40023abec7 0x40023abec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 26 23:29:08.122: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 26 23:29:08.123: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7502,SelfLink:/apis/apps/v1/namespaces/deployment-7502/replicasets/test-rollover-controller,UID:209f9562-664f-41d9-9f70-b42837a526d3,ResourceVersion:3042929,Generation:2,CreationTimestamp:2020-08-26 23:28:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f676058-788f-4ad1-a948-4acfb8c1b877 0x40023abdf7 0x40023abdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 26 23:29:08.124: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7502,SelfLink:/apis/apps/v1/namespaces/deployment-7502/replicasets/test-rollover-deployment-9b8b997cf,UID:e41d8b47-f718-4e7a-98e8-c01e90257591,ResourceVersion:3042877,Generation:2,CreationTimestamp:2020-08-26 23:28:49 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f676058-788f-4ad1-a948-4acfb8c1b877 0x40023abf90 0x40023abf91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 26 23:29:08.130: INFO: Pod "test-rollover-deployment-854595fc44-sqjc5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-sqjc5,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7502,SelfLink:/api/v1/namespaces/deployment-7502/pods/test-rollover-deployment-854595fc44-sqjc5,UID:790f60a4-e5b0-4e0d-869b-9a9eb8ded551,ResourceVersion:3042896,Generation:0,CreationTimestamp:2020-08-26 23:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 30abcbe5-7176-49eb-8b1b-34d035854d9c 0x4000c98f17 0x4000c98f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mnhn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mnhn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mnhn6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4000c98f90} {node.kubernetes.io/unreachable Exists NoExecute 0x4000c98fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:28:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:28:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:28:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:28:52 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.79,StartTime:2020-08-26 23:28:52 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-26 23:28:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b5ec2abb6a5176694f5bd28b13e81bc8a49e394a4cd7202437dff99a57c9ac73}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:29:08.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7502" for this suite. Aug 26 23:29:14.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:29:14.339: INFO: namespace deployment-7502 deletion completed in 6.202473786s • [SLOW TEST:31.550 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:29:14.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Aug 26 23:29:14.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Aug 26 23:29:15.714: INFO: stderr: "" Aug 26 23:29:15.714: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:29:15.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2932" for this suite. 
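Aside: the api-versions check above can be reproduced offline, since the test only asserts that the core group version "v1" appears on its own line in what `kubectl api-versions` prints. A minimal sketch in Python, using an abridged copy of the stdout recorded above (the helper name is illustrative, not the e2e framework's code):

```python
# Sketch: replicate the e2e check that "v1" is among the advertised API versions.
# The list below is an abridged copy of the kubectl stdout recorded in the log above.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "v1\n"
)

def has_core_v1(api_versions_output: str) -> bool:
    """Return True if the core group/version "v1" is listed on its own line."""
    return "v1" in api_versions_output.splitlines()

print(has_core_v1(stdout))  # True
```

Note the exact-line match: `"apps/v1"` alone would not satisfy the check, which is why the test looks for the bare core version rather than a substring.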
Aug 26 23:29:21.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 26 23:29:21.972: INFO: namespace kubectl-2932 deletion completed in 6.247670567s • [SLOW TEST:7.628 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 26 23:29:21.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b921810f-e983-43ad-91c1-965068c5b81c STEP: Creating configMap with name cm-test-opt-upd-c6573f23-ec77-43c0-a2f0-b49161aaf2ec STEP: Creating the pod 
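Aside: the pod created in this test mounts ConfigMaps through a projected volume with `optional: true`, so a referenced ConfigMap may be absent at pod start and be created, updated, or deleted later; the kubelet then syncs the volume contents, which is what the "waiting to observe update in volume" step polls for. A hedged sketch of such a pod spec (container, command, and mount path are illustrative, not the exact manifest the framework builds):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative name
spec:
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create-dc36ec00-ec2f-4a7c-9645-0061cf3b34dd
          optional: true   # pod starts even if this ConfigMap does not exist yet
```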
STEP: Deleting configmap cm-test-opt-del-b921810f-e983-43ad-91c1-965068c5b81c
STEP: Updating configmap cm-test-opt-upd-c6573f23-ec77-43c0-a2f0-b49161aaf2ec
STEP: Creating configMap with name cm-test-opt-create-dc36ec00-ec2f-4a7c-9645-0061cf3b34dd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:30:57.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9770" for this suite.
Aug 26 23:31:19.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:31:20.219: INFO: namespace projected-9770 deletion completed in 22.539155035s
• [SLOW TEST:118.246 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:31:20.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to
be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 26 23:31:20.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 26 23:31:21.551: INFO: stderr: "" Aug 26 23:31:21.551: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 26 23:31:21.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6285" for this suite. 
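Aside: the kubectl version stdout above shows a detail worth noting, the client binary is linux/arm64 while the server is linux/amd64, yet both report v1.15.12, so the test's assertion that both Client Version and Server Version data are printed holds. A small sketch of how such output could be checked, using an abridged copy of that stdout (the regex and helper are illustrative, not the e2e framework's code):

```python
import re

# Abridged copy of the kubectl version stdout recorded in the log above.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", '
    'Platform:"linux/arm64"}\n'
    'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", '
    'Platform:"linux/amd64"}\n'
)

def extract_git_versions(output: str) -> dict:
    """Map "Client"/"Server" to the GitVersion each line reports."""
    pattern = r'(Client|Server) Version: .*?GitVersion:"([^"]+)"'
    return {side: version for side, version in re.findall(pattern, output)}

versions = extract_git_versions(stdout)
print(versions)  # both sides must be present for the test's assertion to hold
```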
Aug 26 23:31:27.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:31:27.819: INFO: namespace kubectl-6285 deletion completed in 6.258269217s
• [SLOW TEST:7.596 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should check is all data is printed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:31:27.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 26 23:31:28.065: INFO: PodSpec: initContainers in spec.initContainers Aug 26 23:32:25.902: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-53ba3f9c-ac1c-48ba-bc7b-7d2503e1d3fb", GenerateName:"", Namespace:"init-container-9972", SelfLink:"/api/v1/namespaces/init-container-9972/pods/pod-init-53ba3f9c-ac1c-48ba-bc7b-7d2503e1d3fb", UID:"3c1816a1-cddb-4328-88c7-1457255924fc", ResourceVersion:"3043442", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734081488, loc:(*time.Location)(0x792fa60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"64496841"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-s5hkc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4002a5ae00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s5hkc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s5hkc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s5hkc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4003086508), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", 
DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002f6afc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003086590)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40030865b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x40030865b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x40030865bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081488, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081488, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081488, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers 
with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081488, loc:(*time.Location)(0x792fa60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.80", StartTime:(*v1.Time)(0x40026d9d00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4002a60150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4002a601c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://aeed9fc2596e28a230e702aea09a7efa547f28c58d3596812d9aadf0c46dd49d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40026d9d40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40026d9d20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:32:25.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9972" for this suite.
Aug 26 23:32:47.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:32:48.101: INFO: namespace init-container-9972 deletion completed in 22.162143965s

• [SLOW TEST:80.279 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:32:48.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9065.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9065.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:32:56.282: INFO: DNS probes using dns-test-dfa5a30c-580a-4c93-b30f-9a68b054839b succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9065.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9065.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:33:04.442: INFO: File wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:04.447: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:04.447: INFO: Lookups using dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e failed for: [wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:09.456: INFO: File wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:09.461: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:09.461: INFO: Lookups using dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e failed for: [wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:14.453: INFO: File wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:14.458: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:14.458: INFO: Lookups using dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e failed for: [wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:19.454: INFO: File wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:19.458: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:19.458: INFO: Lookups using dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e failed for: [wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:24.496: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains 'foo.example.com. ' instead of 'bar.example.com.'
Aug 26 23:33:24.496: INFO: Lookups using dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e failed for: [jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:29.831: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e contains '' instead of 'bar.example.com.'
Aug 26 23:33:29.831: INFO: Lookups using dns-9065/dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e failed for: [jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:34.474: INFO: DNS probes using dns-test-5933827a-22df-4b6b-a9b9-e8d31f06057e succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9065.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9065.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9065.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:33:47.719: INFO: File jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local from pod dns-9065/dns-test-256f629e-e127-4a0b-8735-02c69f4d8170 contains '' instead of '10.107.191.220'
Aug 26 23:33:47.720: INFO: Lookups using 
dns-9065/dns-test-256f629e-e127-4a0b-8735-02c69f4d8170 failed for: [jessie_udp@dns-test-service-3.dns-9065.svc.cluster.local]
Aug 26 23:33:52.735: INFO: DNS probes using dns-test-256f629e-e127-4a0b-8735-02c69f4d8170 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:33:53.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9065" for this suite.
Aug 26 23:34:01.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:34:01.655: INFO: namespace dns-9065 deletion completed in 8.173449365s

• [SLOW TEST:73.550 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:34:01.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be 
provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 26 23:34:01.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4778'
Aug 26 23:34:07.339: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 23:34:07.339: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug 26 23:34:09.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4778'
Aug 26 23:34:10.694: INFO: stderr: ""
Aug 26 23:34:10.694: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:34:10.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4778" for this suite. 
Aug 26 23:35:58.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:35:58.944: INFO: namespace kubectl-4778 deletion completed in 1m48.242870035s

• [SLOW TEST:117.286 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:35:58.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-9c6dfdd9-3931-488a-82dc-77afd6f428b4
STEP: Creating a pod to 
test consume secrets
Aug 26 23:35:59.046: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83" in namespace "projected-691" to be "success or failure"
Aug 26 23:35:59.053: INFO: Pod "pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83": Phase="Pending", Reason="", readiness=false. Elapsed: 7.054524ms
Aug 26 23:36:01.067: INFO: Pod "pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02111364s
Aug 26 23:36:03.097: INFO: Pod "pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051668769s
Aug 26 23:36:05.104: INFO: Pod "pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057941127s
STEP: Saw pod success
Aug 26 23:36:05.104: INFO: Pod "pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83" satisfied condition "success or failure"
Aug 26 23:36:05.109: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:36:05.212: INFO: Waiting for pod pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83 to disappear
Aug 26 23:36:05.378: INFO: Pod pod-projected-secrets-1656b69d-2b2f-48f2-9f6d-53468cba3d83 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:36:05.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-691" for this suite. 
Aug 26 23:36:11.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:36:11.665: INFO: namespace projected-691 deletion completed in 6.27965061s

• [SLOW TEST:12.720 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:36:11.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 26 23:36:16.596: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9152 pod-service-account-1cae7a2a-e094-4c38-b411-8a5a5c6cfc06 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 26 
23:36:18.064: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9152 pod-service-account-1cae7a2a-e094-4c38-b411-8a5a5c6cfc06 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 26 23:36:19.494: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9152 pod-service-account-1cae7a2a-e094-4c38-b411-8a5a5c6cfc06 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:36:21.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9152" for this suite.
Aug 26 23:36:27.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:36:27.196: INFO: namespace svcaccounts-9152 deletion completed in 6.169115343s

• [SLOW TEST:15.528 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client 
Aug 26 23:36:27.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 26 23:36:27.334: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e210fe9d-65fd-4d67-86f4-bdb4952b151a
STEP: Creating a pod to test consume configMaps
Aug 26 23:36:34.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6" in namespace "configmap-7655" to be "success or failure"
Aug 26 23:36:34.295: INFO: Pod "pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6": Phase="Pending", Reason="", readiness=false. Elapsed: 128.999373ms
Aug 26 23:36:36.333: INFO: Pod "pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16731133s
Aug 26 23:36:38.340: INFO: Pod "pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6": Phase="Running", Reason="", readiness=true. Elapsed: 4.173803114s
Aug 26 23:36:40.346: INFO: Pod "pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180329475s
STEP: Saw pod success
Aug 26 23:36:40.346: INFO: Pod "pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6" satisfied condition "success or failure"
Aug 26 23:36:40.350: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6 container configmap-volume-test: 
STEP: delete the pod
Aug 26 23:36:40.415: INFO: Waiting for pod pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6 to disappear
Aug 26 23:36:40.418: INFO: Pod pod-configmaps-c509a7ca-51da-4e9e-aca5-3daa3b4a38b6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:36:40.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7655" for this suite.
Aug 26 23:36:46.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:36:46.595: INFO: namespace configmap-7655 deletion completed in 6.16980471s

• [SLOW TEST:12.987 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:36:46.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 26 23:36:52.998: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 26 23:37:04.253: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:37:04.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8555" for this suite.
Aug 26 23:37:16.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:37:17.179: INFO: namespace pods-8555 deletion completed in 12.909080004s

• [SLOW TEST:30.583 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
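The Delete Grace Period spec polls until the pod lookup fails, treating NotFound as proof the termination was observed and completed (the "no pod exists with the name we were looking for" line). A minimal sketch of that wait loop, using a fake in-memory store in place of the API server (`FakePodStore` and `wait_for_pod_to_disappear` are hypothetical names, not framework APIs):

```python
import time

class NotFound(Exception):
    pass

class FakePodStore:
    """Stand-in for the API server: the pod vanishes after a few polls."""
    def __init__(self, polls_until_gone):
        self.polls_until_gone = polls_until_gone

    def get(self, name):
        if self.polls_until_gone <= 0:
            raise NotFound(name)
        self.polls_until_gone -= 1
        return {"name": name, "phase": "Terminating"}

def wait_for_pod_to_disappear(store, name, interval=0.0, timeout_polls=10):
    # Poll until a lookup raises NotFound, which the test treats as
    # "the termination request was observed and completed".
    for _ in range(timeout_polls):
        try:
            store.get(name)
        except NotFound:
            return True
        time.sleep(interval)
    return False

print(wait_for_pod_to_disappear(FakePodStore(3), "pod-with-grace"))  # True
```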
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:37:17.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-88cec077-b0a2-4a1c-94a5-a8f55c0bc5f5
STEP: Creating a pod to test consume configMaps
Aug 26 23:37:18.773: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb" in namespace "projected-7383" to be "success or failure"
Aug 26 23:37:19.177: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 403.904135ms
Aug 26 23:37:21.496: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722208226s
Aug 26 23:37:23.503: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.729339017s
Aug 26 23:37:25.510: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.736040292s
Aug 26 23:37:27.583: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.809805273s
Aug 26 23:37:29.589: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.81590665s
STEP: Saw pod success
Aug 26 23:37:29.590: INFO: Pod "pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb" satisfied condition "success or failure"
Aug 26 23:37:29.644: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 23:37:29.662: INFO: Waiting for pod pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb to disappear
Aug 26 23:37:29.667: INFO: Pod pod-projected-configmaps-954a4612-36aa-4bb0-baa1-be010051a5bb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:37:29.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7383" for this suite.
Aug 26 23:37:37.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:37:37.818: INFO: namespace projected-7383 deletion completed in 8.144371896s

• [SLOW TEST:20.637 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
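Every volume spec in this run uses the same pattern visible in the Elapsed lines above: poll the pod roughly every two seconds, up to a 5m0s deadline (about 150 polls), until a terminal phase appears. A minimal sketch of that condition loop over a pre-recorded phase sequence (illustrative only; the real framework polls the API server):

```python
import itertools

def wait_for_success_or_failure(phases, max_polls=150):
    """Consume a sequence of observed pod phases until a terminal one
    is seen; return (phase, polls_used). max_polls=150 approximates the
    5m0s deadline at a ~2s poll interval."""
    it = iter(phases)
    for n in itertools.count(1):
        if n > max_polls:
            return ("DeadlineExceeded", max_polls)
        phase = next(it)
        if phase in ("Succeeded", "Failed"):
            return (phase, n)

# Mirrors the transcript: several Pending polls, then Succeeded.
observed = ["Pending"] * 5 + ["Succeeded"]
print(wait_for_success_or_failure(observed))  # ('Succeeded', 6)
```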
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:37:37.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 26 23:37:37.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2844'
Aug 26 23:37:39.218: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 23:37:39.218: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Aug 26 23:37:39.240: INFO: scanned /root for discovery docs: 
Aug 26 23:37:39.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2844'
Aug 26 23:38:00.153: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 26 23:38:00.153: INFO: stdout: "Created e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb\nScaling up e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 26 23:38:00.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2844'
Aug 26 23:38:01.463: INFO: stderr: ""
Aug 26 23:38:01.463: INFO: stdout: "e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb-2lxk9 "
Aug 26 23:38:01.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb-2lxk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2844'
Aug 26 23:38:02.722: INFO: stderr: ""
Aug 26 23:38:02.723: INFO: stdout: "true"
Aug 26 23:38:02.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb-2lxk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2844'
Aug 26 23:38:03.999: INFO: stderr: ""
Aug 26 23:38:03.999: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 26 23:38:04.000: INFO: e2e-test-nginx-rc-2d9b2a087a6deee85834ff6fdc4ee1bb-2lxk9 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Aug 26 23:38:04.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2844'
Aug 26 23:38:05.361: INFO: stderr: ""
Aug 26 23:38:05.361: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:38:05.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2844" for this suite.
Aug 26 23:38:27.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:38:27.560: INFO: namespace kubectl-2844 deletion completed in 22.190972649s

• [SLOW TEST:49.741 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
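The rolling-update stdout above narrates the surge-and-drain dance: scale the new controller up one replica at a time, then the old one down, keeping at least one pod available and never exceeding two. A toy simulation of that schedule (a sketch of the strategy, not kubectl's implementation):

```python
def rolling_update(old_replicas=1, new_target=1, max_total=2):
    """Emit the scaling steps for a one-at-a-time rolling update that
    never exceeds max_total pods, matching the transcript's
    "keep 1 pods available, don't exceed 2 pods" constraint."""
    old, new = old_replicas, 0
    steps = []
    while new < new_target or old > 0:
        if new < new_target and old + new < max_total:
            new += 1
            steps.append(f"Scaling new up to {new}")
        else:
            old -= 1
            steps.append(f"Scaling old down to {old}")
    return steps

for step in rolling_update():
    print(step)
# Scaling new up to 1
# Scaling old down to 0
```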
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:38:27.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 23:38:42.058: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:42.195: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:44.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:44.203: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:46.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:46.335: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:48.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:48.202: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:50.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:50.202: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:52.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:52.204: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:54.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:54.203: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:56.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:56.203: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:38:58.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:38:58.208: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:39:00.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:39:00.506: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:39:02.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:39:02.203: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:39:04.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:39:04.419: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:39:04.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2855" for this suite.
Aug 26 23:39:26.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:39:26.905: INFO: namespace container-lifecycle-hook-2855 deletion completed in 22.466733582s

• [SLOW TEST:59.344 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
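The preStop spec verifies ordering: when a pod with a lifecycle hook is deleted, the kubelet runs the preStop exec hook to completion before signaling the container, which is why the pod lingers through the long "still exists" polling above. A schematic of that ordering (illustrative event list, not kubelet code):

```python
events = []

def pre_stop():
    # The preStop exec hook must run and finish first; the "check
    # prestop hook" STEP above asserts it actually executed.
    events.append("preStop hook executed")

def terminate_container():
    pre_stop()
    events.append("SIGTERM sent")
    events.append("container exited")

terminate_container()
print(events)
# ['preStop hook executed', 'SIGTERM sent', 'container exited']
```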
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:39:26.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 26 23:39:27.537: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 26 23:39:32.838: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:39:33.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-51" for this suite.
Aug 26 23:39:44.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:39:44.909: INFO: namespace replication-controller-51 deletion completed in 10.815046615s

• [SLOW TEST:18.000 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
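The ReplicationController spec hinges on label-selector ownership: a controller manages exactly the pods whose labels match its selector, so editing a pod's matched label "releases" it. A minimal sketch of that matching rule (hypothetical helper, not the controller's code):

```python
def select_pods(pods, selector):
    """Return the names of pods whose labels satisfy every key/value
    pair in the controller's selector."""
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

selector = {"name": "pod-release"}
pods = [{"name": "pod-release-abc", "labels": {"name": "pod-release"}}]
print(select_pods(pods, selector))  # ['pod-release-abc']

# Changing the matched label orphans the pod, as in
# "Then the pod is released" above.
pods[0]["labels"]["name"] = "something-else"
print(select_pods(pods, selector))  # []
```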
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:39:44.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-949c4272-9999-4e68-9cab-0249080c904c in namespace container-probe-965
Aug 26 23:39:49.540: INFO: Started pod busybox-949c4272-9999-4e68-9cab-0249080c904c in namespace container-probe-965
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:39:49.544: INFO: Initial restart count of pod busybox-949c4272-9999-4e68-9cab-0249080c904c is 0
Aug 26 23:40:37.976: INFO: Restart count of pod container-probe-965/busybox-949c4272-9999-4e68-9cab-0249080c904c is now 1 (48.432255769s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:40:37.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-965" for this suite.
Aug 26 23:40:46.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:40:46.276: INFO: namespace container-probe-965 deletion completed in 8.254798307s

• [SLOW TEST:61.364 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
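The probe spec deletes `/tmp/health` inside the container and waits for the restart count to tick from 0 to 1 (the ~48s elapsed line above). The kubelet's rule can be sketched as: count consecutive probe failures, restart once they reach the failure threshold, then reset. An illustrative simulation (not kubelet code; the default `failureThreshold` is 3):

```python
def run_liveness(probe_results, failure_threshold=3):
    """Count restarts for a stream of exec-probe outcomes, e.g. whether
    `cat /tmp/health` succeeded on each ~periodic probe."""
    restarts, consecutive_failures = 0, 0
    for healthy in probe_results:
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restarts += 1          # container restarted
                consecutive_failures = 0
    return restarts

# Healthy at first, then /tmp/health is removed and probes fail.
print(run_liveness([True, True, False, False, False]))  # 1
```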
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:40:46.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:40:46.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf" in namespace "downward-api-3503" to be "success or failure"
Aug 26 23:40:46.458: INFO: Pod "downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.470276ms
Aug 26 23:40:48.465: INFO: Pod "downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036881564s
Aug 26 23:40:50.528: INFO: Pod "downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099664475s
STEP: Saw pod success
Aug 26 23:40:50.528: INFO: Pod "downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf" satisfied condition "success or failure"
Aug 26 23:40:50.547: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf container client-container: 
STEP: delete the pod
Aug 26 23:40:51.079: INFO: Waiting for pod downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf to disappear
Aug 26 23:40:51.085: INFO: Pod downwardapi-volume-31312773-3b97-4c08-bd57-869e3543d3bf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:40:51.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3503" for this suite.
Aug 26 23:40:57.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:40:57.251: INFO: namespace downward-api-3503 deletion completed in 6.156875491s

• [SLOW TEST:10.974 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
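The downward API spec mounts the container's memory request as a file; with the default divisor of 1, a request like `64Mi` is written as a plain byte count. A tiny subset of Kubernetes quantity parsing, binary suffixes only, enough to show what lands in that file (a sketch, not the full quantity grammar):

```python
def parse_quantity(q):
    """Convert a binary-suffixed Kubernetes quantity to bytes."""
    suffixes = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in suffixes.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # already a bare byte count

print(parse_quantity("64Mi"))  # 67108864
```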
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:40:57.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-8nqm
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 23:40:57.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8nqm" in namespace "subpath-5832" to be "success or failure"
Aug 26 23:40:57.543: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.331985ms
Aug 26 23:40:59.548: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014483659s
Aug 26 23:41:01.554: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 4.021005568s
Aug 26 23:41:03.562: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 6.028522142s
Aug 26 23:41:05.592: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 8.05861063s
Aug 26 23:41:07.600: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 10.066474356s
Aug 26 23:41:09.608: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 12.074441913s
Aug 26 23:41:11.615: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 14.081600272s
Aug 26 23:41:13.622: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 16.089106928s
Aug 26 23:41:15.631: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 18.097252897s
Aug 26 23:41:17.638: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 20.104619708s
Aug 26 23:41:19.645: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 22.111351815s
Aug 26 23:41:21.652: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Running", Reason="", readiness=true. Elapsed: 24.118841923s
Aug 26 23:41:23.658: INFO: Pod "pod-subpath-test-secret-8nqm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.124627483s
STEP: Saw pod success
Aug 26 23:41:23.658: INFO: Pod "pod-subpath-test-secret-8nqm" satisfied condition "success or failure"
Aug 26 23:41:23.662: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-8nqm container test-container-subpath-secret-8nqm: 
STEP: delete the pod
Aug 26 23:41:23.689: INFO: Waiting for pod pod-subpath-test-secret-8nqm to disappear
Aug 26 23:41:23.737: INFO: Pod pod-subpath-test-secret-8nqm no longer exists
STEP: Deleting pod pod-subpath-test-secret-8nqm
Aug 26 23:41:23.738: INFO: Deleting pod "pod-subpath-test-secret-8nqm" in namespace "subpath-5832"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:41:23.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5832" for this suite.
Aug 26 23:41:31.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:41:31.929: INFO: namespace subpath-5832 deletion completed in 8.175361667s

• [SLOW TEST:34.677 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
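"Atomic writer volumes" refers to how Kubernetes materializes secret/configmap data: content is written to a staging location and swapped into place with a rename, so readers under a subpath never observe a half-written file. The core pattern is write-to-temp-then-rename (an illustrative sketch; `atomic_write` is a hypothetical helper, and `os.replace` is atomic on POSIX filesystems):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to a temp file in the target directory, fsync it,
    then atomically swap it into place."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp)
        raise

target = os.path.join(tempfile.gettempdir(), "atomic-demo.txt")
atomic_write(target, b"secret value")
with open(target, "rb") as f:
    print(f.read())  # b'secret value'
os.remove(target)
```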
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:41:31.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 23:41:32.029: INFO: Waiting up to 5m0s for pod "pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a" in namespace "emptydir-5142" to be "success or failure"
Aug 26 23:41:32.035: INFO: Pod "pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.915776ms
Aug 26 23:41:34.044: INFO: Pod "pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014409368s
Aug 26 23:41:36.053: INFO: Pod "pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a": Phase="Running", Reason="", readiness=true. Elapsed: 4.02308418s
Aug 26 23:41:38.060: INFO: Pod "pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030033675s
STEP: Saw pod success
Aug 26 23:41:38.060: INFO: Pod "pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a" satisfied condition "success or failure"
Aug 26 23:41:38.064: INFO: Trying to get logs from node iruya-worker pod pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a container test-container: 
STEP: delete the pod
Aug 26 23:41:38.101: INFO: Waiting for pod pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a to disappear
Aug 26 23:41:38.131: INFO: Pod pod-9bebdb6e-44a9-46e3-9d82-82bd888cc60a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:41:38.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5142" for this suite.
Aug 26 23:41:46.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:41:46.401: INFO: namespace emptydir-5142 deletion completed in 8.258095681s

• [SLOW TEST:14.470 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:41:46.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug 26 23:41:46.508: INFO: Waiting up to 5m0s for pod "client-containers-640789ec-2db3-47f6-93db-59d991d4280b" in namespace "containers-5098" to be "success or failure"
Aug 26 23:41:46.792: INFO: Pod "client-containers-640789ec-2db3-47f6-93db-59d991d4280b": Phase="Pending", Reason="", readiness=false. Elapsed: 283.193351ms
Aug 26 23:41:49.056: INFO: Pod "client-containers-640789ec-2db3-47f6-93db-59d991d4280b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.547353105s
Aug 26 23:41:51.259: INFO: Pod "client-containers-640789ec-2db3-47f6-93db-59d991d4280b": Phase="Running", Reason="", readiness=true. Elapsed: 4.750130827s
Aug 26 23:41:53.266: INFO: Pod "client-containers-640789ec-2db3-47f6-93db-59d991d4280b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.757846393s
STEP: Saw pod success
Aug 26 23:41:53.267: INFO: Pod "client-containers-640789ec-2db3-47f6-93db-59d991d4280b" satisfied condition "success or failure"
Aug 26 23:41:53.272: INFO: Trying to get logs from node iruya-worker2 pod client-containers-640789ec-2db3-47f6-93db-59d991d4280b container test-container: 
STEP: delete the pod
Aug 26 23:41:53.299: INFO: Waiting for pod client-containers-640789ec-2db3-47f6-93db-59d991d4280b to disappear
Aug 26 23:41:53.303: INFO: Pod client-containers-640789ec-2db3-47f6-93db-59d991d4280b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:41:53.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5098" for this suite.
Aug 26 23:41:59.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:41:59.465: INFO: namespace containers-5098 deletion completed in 6.15378236s

• [SLOW TEST:13.059 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:41:59.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 23:42:04.126: INFO: Successfully updated pod "pod-update-ce45de24-0f21-4274-9327-b40c3ab11e2a"
STEP: verifying the updated pod is in kubernetes
Aug 26 23:42:04.177: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:42:04.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8381" for this suite.
Aug 26 23:42:26.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:42:26.325: INFO: namespace pods-8381 deletion completed in 22.137739757s

• [SLOW TEST:26.859 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
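The "should be updated" flow above (get the pod, mutate it, submit the update) depends on the API server's optimistic concurrency: an update is rejected with a conflict unless it carries the object's current `resourceVersion`, so clients re-read and retry on conflict. A stdlib-only, in-memory sketch of that pattern — `store`, `update`, and `updateWithRetry` are hypothetical stand-ins, not client-go APIs:

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("conflict: stale resourceVersion")

// store models a single object with version-checked writes, mimicking the
// API server's optimistic concurrency control.
type store struct {
	labels  map[string]string
	version int
}

// update applies mutate only if the caller's version matches the stored
// one, then bumps the version; otherwise it reports a conflict.
func (s *store) update(version int, mutate func(map[string]string)) error {
	if version != s.version {
		return errConflict
	}
	mutate(s.labels)
	s.version++
	return nil
}

// updateWithRetry is the get-modify-update loop: re-read the latest
// version and reapply the mutation until the write is accepted.
func updateWithRetry(s *store, mutate func(map[string]string)) {
	for {
		v := s.version // "GET" the current object version
		if err := s.update(v, mutate); err == nil {
			return
		}
	}
}

func main() {
	s := &store{labels: map[string]string{"time": "old"}}
	updateWithRetry(s, func(l map[string]string) { l["time"] = "new" })
	fmt.Println(s.labels["time"], s.version)
}
```

In real code the equivalent loop is usually expressed with client-go's conflict-retry helper rather than written by hand.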
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:42:26.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-628
I0826 23:42:26.466082       7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-628, replica count: 1
I0826 23:42:27.519171       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:42:28.522994       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:42:29.523919       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:42:30.525724       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:42:31.527500       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 23:42:31.949: INFO: Created: latency-svc-5vpz2
Aug 26 23:42:31.982: INFO: Got endpoints: latency-svc-5vpz2 [351.915474ms]
Aug 26 23:42:32.130: INFO: Created: latency-svc-f4zm9
Aug 26 23:42:32.144: INFO: Got endpoints: latency-svc-f4zm9 [161.275919ms]
Aug 26 23:42:32.286: INFO: Created: latency-svc-bfwt5
Aug 26 23:42:32.421: INFO: Got endpoints: latency-svc-bfwt5 [437.756961ms]
Aug 26 23:42:32.456: INFO: Created: latency-svc-hwt5p
Aug 26 23:42:32.487: INFO: Got endpoints: latency-svc-hwt5p [502.591686ms]
Aug 26 23:42:32.558: INFO: Created: latency-svc-zzkrz
Aug 26 23:42:32.574: INFO: Got endpoints: latency-svc-zzkrz [590.29621ms]
Aug 26 23:42:32.616: INFO: Created: latency-svc-sg6xx
Aug 26 23:42:32.640: INFO: Got endpoints: latency-svc-sg6xx [656.586612ms]
Aug 26 23:42:32.721: INFO: Created: latency-svc-qhmrm
Aug 26 23:42:32.723: INFO: Got endpoints: latency-svc-qhmrm [739.893378ms]
Aug 26 23:42:32.779: INFO: Created: latency-svc-rxhxd
Aug 26 23:42:32.869: INFO: Got endpoints: latency-svc-rxhxd [885.990789ms]
Aug 26 23:42:32.919: INFO: Created: latency-svc-7j7h2
Aug 26 23:42:32.949: INFO: Got endpoints: latency-svc-7j7h2 [965.594375ms]
Aug 26 23:42:33.002: INFO: Created: latency-svc-fhfqb
Aug 26 23:42:33.018: INFO: Got endpoints: latency-svc-fhfqb [1.034349188s]
Aug 26 23:42:33.055: INFO: Created: latency-svc-cq2qz
Aug 26 23:42:33.063: INFO: Got endpoints: latency-svc-cq2qz [1.07832071s]
Aug 26 23:42:33.088: INFO: Created: latency-svc-xkkmq
Aug 26 23:42:33.127: INFO: Got endpoints: latency-svc-xkkmq [1.141184933s]
Aug 26 23:42:33.141: INFO: Created: latency-svc-h6b5l
Aug 26 23:42:33.159: INFO: Got endpoints: latency-svc-h6b5l [1.1736893s]
Aug 26 23:42:33.184: INFO: Created: latency-svc-qr8jh
Aug 26 23:42:33.202: INFO: Got endpoints: latency-svc-qr8jh [1.21529085s]
Aug 26 23:42:33.301: INFO: Created: latency-svc-87k7f
Aug 26 23:42:33.304: INFO: Got endpoints: latency-svc-87k7f [1.316954619s]
Aug 26 23:42:33.352: INFO: Created: latency-svc-w95gg
Aug 26 23:42:33.371: INFO: Got endpoints: latency-svc-w95gg [1.387834797s]
Aug 26 23:42:33.396: INFO: Created: latency-svc-gjm8v
Aug 26 23:42:33.450: INFO: Got endpoints: latency-svc-gjm8v [1.305214774s]
Aug 26 23:42:33.471: INFO: Created: latency-svc-twq6b
Aug 26 23:42:33.490: INFO: Got endpoints: latency-svc-twq6b [1.068853282s]
Aug 26 23:42:33.538: INFO: Created: latency-svc-dtr7c
Aug 26 23:42:33.545: INFO: Got endpoints: latency-svc-dtr7c [1.057661294s]
Aug 26 23:42:33.667: INFO: Created: latency-svc-b8zcx
Aug 26 23:42:33.670: INFO: Got endpoints: latency-svc-b8zcx [1.095398678s]
Aug 26 23:42:34.236: INFO: Created: latency-svc-dwxs5
Aug 26 23:42:34.246: INFO: Got endpoints: latency-svc-dwxs5 [1.606538026s]
Aug 26 23:42:34.321: INFO: Created: latency-svc-m6cv7
Aug 26 23:42:34.378: INFO: Got endpoints: latency-svc-m6cv7 [1.654607701s]
Aug 26 23:42:34.410: INFO: Created: latency-svc-dk8w7
Aug 26 23:42:34.420: INFO: Got endpoints: latency-svc-dk8w7 [1.550884939s]
Aug 26 23:42:34.453: INFO: Created: latency-svc-pnhhq
Aug 26 23:42:34.475: INFO: Got endpoints: latency-svc-pnhhq [1.525238545s]
Aug 26 23:42:34.549: INFO: Created: latency-svc-jdngj
Aug 26 23:42:34.804: INFO: Got endpoints: latency-svc-jdngj [1.785440763s]
Aug 26 23:42:34.825: INFO: Created: latency-svc-s4jst
Aug 26 23:42:34.846: INFO: Got endpoints: latency-svc-s4jst [1.783264594s]
Aug 26 23:42:35.123: INFO: Created: latency-svc-ggvcm
Aug 26 23:42:35.350: INFO: Created: latency-svc-wwmvs
Aug 26 23:42:35.350: INFO: Got endpoints: latency-svc-ggvcm [2.223439834s]
Aug 26 23:42:35.370: INFO: Got endpoints: latency-svc-wwmvs [2.210329333s]
Aug 26 23:42:35.398: INFO: Created: latency-svc-xhn7b
Aug 26 23:42:35.404: INFO: Got endpoints: latency-svc-xhn7b [2.202303502s]
Aug 26 23:42:35.429: INFO: Created: latency-svc-hjxk8
Aug 26 23:42:35.447: INFO: Got endpoints: latency-svc-hjxk8 [2.142680447s]
Aug 26 23:42:35.547: INFO: Created: latency-svc-z4rf7
Aug 26 23:42:35.918: INFO: Got endpoints: latency-svc-z4rf7 [2.547098243s]
Aug 26 23:42:35.968: INFO: Created: latency-svc-465w7
Aug 26 23:42:35.986: INFO: Got endpoints: latency-svc-465w7 [2.536263445s]
Aug 26 23:42:36.086: INFO: Created: latency-svc-7m9ck
Aug 26 23:42:36.100: INFO: Got endpoints: latency-svc-7m9ck [2.609568183s]
Aug 26 23:42:36.149: INFO: Created: latency-svc-sx859
Aug 26 23:42:36.166: INFO: Got endpoints: latency-svc-sx859 [2.62096336s]
Aug 26 23:42:36.235: INFO: Created: latency-svc-wz6mr
Aug 26 23:42:36.238: INFO: Got endpoints: latency-svc-wz6mr [2.567937776s]
Aug 26 23:42:36.372: INFO: Created: latency-svc-qmmdj
Aug 26 23:42:36.375: INFO: Got endpoints: latency-svc-qmmdj [2.128745788s]
Aug 26 23:42:36.462: INFO: Created: latency-svc-86q4r
Aug 26 23:42:36.558: INFO: Got endpoints: latency-svc-86q4r [2.179398779s]
Aug 26 23:42:36.563: INFO: Created: latency-svc-c4j6l
Aug 26 23:42:36.569: INFO: Got endpoints: latency-svc-c4j6l [2.148234361s]
Aug 26 23:42:36.645: INFO: Created: latency-svc-8gdvd
Aug 26 23:42:36.762: INFO: Got endpoints: latency-svc-8gdvd [2.286888631s]
Aug 26 23:42:36.770: INFO: Created: latency-svc-9njq6
Aug 26 23:42:36.827: INFO: Got endpoints: latency-svc-9njq6 [2.023376576s]
Aug 26 23:42:36.925: INFO: Created: latency-svc-jf5sz
Aug 26 23:42:36.960: INFO: Got endpoints: latency-svc-jf5sz [2.113097667s]
Aug 26 23:42:36.960: INFO: Created: latency-svc-rcz5z
Aug 26 23:42:36.983: INFO: Got endpoints: latency-svc-rcz5z [1.632647909s]
Aug 26 23:42:37.014: INFO: Created: latency-svc-8xt6f
Aug 26 23:42:37.062: INFO: Got endpoints: latency-svc-8xt6f [1.692114488s]
Aug 26 23:42:37.077: INFO: Created: latency-svc-kp92q
Aug 26 23:42:37.092: INFO: Got endpoints: latency-svc-kp92q [1.686905326s]
Aug 26 23:42:37.137: INFO: Created: latency-svc-6k9p8
Aug 26 23:42:37.151: INFO: Got endpoints: latency-svc-6k9p8 [1.704653575s]
Aug 26 23:42:37.229: INFO: Created: latency-svc-tlnsp
Aug 26 23:42:37.248: INFO: Got endpoints: latency-svc-tlnsp [1.329695148s]
Aug 26 23:42:37.290: INFO: Created: latency-svc-ndj69
Aug 26 23:42:37.589: INFO: Got endpoints: latency-svc-ndj69 [1.602207453s]
Aug 26 23:42:37.828: INFO: Created: latency-svc-czhsc
Aug 26 23:42:37.925: INFO: Got endpoints: latency-svc-czhsc [1.825653258s]
Aug 26 23:42:38.050: INFO: Created: latency-svc-m7dh2
Aug 26 23:42:38.269: INFO: Got endpoints: latency-svc-m7dh2 [2.102512751s]
Aug 26 23:42:38.714: INFO: Created: latency-svc-r6bsc
Aug 26 23:42:38.753: INFO: Got endpoints: latency-svc-r6bsc [2.514428935s]
Aug 26 23:42:38.966: INFO: Created: latency-svc-4c9t2
Aug 26 23:42:38.970: INFO: Got endpoints: latency-svc-4c9t2 [2.59422683s]
Aug 26 23:42:39.046: INFO: Created: latency-svc-4kg6x
Aug 26 23:42:39.064: INFO: Got endpoints: latency-svc-4kg6x [2.506397811s]
Aug 26 23:42:39.193: INFO: Created: latency-svc-s7jk2
Aug 26 23:42:39.208: INFO: Got endpoints: latency-svc-s7jk2 [2.639247825s]
Aug 26 23:42:39.271: INFO: Created: latency-svc-5rp5r
Aug 26 23:42:39.433: INFO: Got endpoints: latency-svc-5rp5r [2.670514119s]
Aug 26 23:42:39.437: INFO: Created: latency-svc-mjf4k
Aug 26 23:42:39.743: INFO: Got endpoints: latency-svc-mjf4k [2.915017324s]
Aug 26 23:42:39.942: INFO: Created: latency-svc-84vbz
Aug 26 23:42:39.965: INFO: Got endpoints: latency-svc-84vbz [3.004980062s]
Aug 26 23:42:40.127: INFO: Created: latency-svc-4p5bj
Aug 26 23:42:40.318: INFO: Got endpoints: latency-svc-4p5bj [3.334746096s]
Aug 26 23:42:40.361: INFO: Created: latency-svc-psxdb
Aug 26 23:42:40.378: INFO: Got endpoints: latency-svc-psxdb [3.315204002s]
Aug 26 23:42:40.511: INFO: Created: latency-svc-8z4hv
Aug 26 23:42:40.552: INFO: Got endpoints: latency-svc-8z4hv [3.459998436s]
Aug 26 23:42:40.607: INFO: Created: latency-svc-ch4tz
Aug 26 23:42:40.684: INFO: Got endpoints: latency-svc-ch4tz [3.532033552s]
Aug 26 23:42:40.705: INFO: Created: latency-svc-58sx5
Aug 26 23:42:40.738: INFO: Got endpoints: latency-svc-58sx5 [3.489649043s]
Aug 26 23:42:40.770: INFO: Created: latency-svc-kj6wz
Aug 26 23:42:40.914: INFO: Got endpoints: latency-svc-kj6wz [3.324352152s]
Aug 26 23:42:40.917: INFO: Created: latency-svc-dn29q
Aug 26 23:42:40.930: INFO: Got endpoints: latency-svc-dn29q [3.004436173s]
Aug 26 23:42:41.092: INFO: Created: latency-svc-8pxpd
Aug 26 23:42:41.094: INFO: Got endpoints: latency-svc-8pxpd [2.824807884s]
Aug 26 23:42:41.126: INFO: Created: latency-svc-zsdvb
Aug 26 23:42:41.141: INFO: Got endpoints: latency-svc-zsdvb [2.387913307s]
Aug 26 23:42:41.163: INFO: Created: latency-svc-268w8
Aug 26 23:42:41.177: INFO: Got endpoints: latency-svc-268w8 [2.206082013s]
Aug 26 23:42:41.259: INFO: Created: latency-svc-slz9k
Aug 26 23:42:41.262: INFO: Got endpoints: latency-svc-slz9k [2.196960505s]
Aug 26 23:42:41.290: INFO: Created: latency-svc-l2bv8
Aug 26 23:42:41.302: INFO: Got endpoints: latency-svc-l2bv8 [2.093927351s]
Aug 26 23:42:41.329: INFO: Created: latency-svc-fqnwh
Aug 26 23:42:41.339: INFO: Got endpoints: latency-svc-fqnwh [1.905822774s]
Aug 26 23:42:41.411: INFO: Created: latency-svc-w44k7
Aug 26 23:42:41.429: INFO: Got endpoints: latency-svc-w44k7 [1.68624754s]
Aug 26 23:42:41.466: INFO: Created: latency-svc-wdqhs
Aug 26 23:42:41.495: INFO: Got endpoints: latency-svc-wdqhs [1.529833668s]
Aug 26 23:42:41.540: INFO: Created: latency-svc-jnppp
Aug 26 23:42:41.556: INFO: Got endpoints: latency-svc-jnppp [1.237092062s]
Aug 26 23:42:41.576: INFO: Created: latency-svc-vmhsm
Aug 26 23:42:41.586: INFO: Got endpoints: latency-svc-vmhsm [1.207738226s]
Aug 26 23:42:41.606: INFO: Created: latency-svc-zhs2j
Aug 26 23:42:41.618: INFO: Got endpoints: latency-svc-zhs2j [1.066235507s]
Aug 26 23:42:41.678: INFO: Created: latency-svc-bksj5
Aug 26 23:42:41.683: INFO: Got endpoints: latency-svc-bksj5 [998.704233ms]
Aug 26 23:42:41.717: INFO: Created: latency-svc-vrpjf
Aug 26 23:42:41.730: INFO: Got endpoints: latency-svc-vrpjf [991.965668ms]
Aug 26 23:42:41.837: INFO: Created: latency-svc-jxj24
Aug 26 23:42:41.858: INFO: Got endpoints: latency-svc-jxj24 [943.89749ms]
Aug 26 23:42:41.903: INFO: Created: latency-svc-ncq4j
Aug 26 23:42:41.917: INFO: Got endpoints: latency-svc-ncq4j [986.821184ms]
Aug 26 23:42:41.972: INFO: Created: latency-svc-4xh24
Aug 26 23:42:41.974: INFO: Got endpoints: latency-svc-4xh24 [880.280281ms]
Aug 26 23:42:42.002: INFO: Created: latency-svc-4mbc4
Aug 26 23:42:42.020: INFO: Got endpoints: latency-svc-4mbc4 [878.488146ms]
Aug 26 23:42:42.045: INFO: Created: latency-svc-5v65z
Aug 26 23:42:42.056: INFO: Got endpoints: latency-svc-5v65z [879.177575ms]
Aug 26 23:42:42.115: INFO: Created: latency-svc-dpmn7
Aug 26 23:42:42.122: INFO: Got endpoints: latency-svc-dpmn7 [860.228658ms]
Aug 26 23:42:42.146: INFO: Created: latency-svc-v7zh6
Aug 26 23:42:42.165: INFO: Got endpoints: latency-svc-v7zh6 [861.911052ms]
Aug 26 23:42:42.211: INFO: Created: latency-svc-7kv6m
Aug 26 23:42:42.294: INFO: Got endpoints: latency-svc-7kv6m [955.344985ms]
Aug 26 23:42:42.327: INFO: Created: latency-svc-zn9jg
Aug 26 23:42:42.346: INFO: Got endpoints: latency-svc-zn9jg [916.450543ms]
Aug 26 23:42:42.438: INFO: Created: latency-svc-w929q
Aug 26 23:42:42.474: INFO: Got endpoints: latency-svc-w929q [978.685543ms]
Aug 26 23:42:42.474: INFO: Created: latency-svc-7knbr
Aug 26 23:42:42.490: INFO: Got endpoints: latency-svc-7knbr [933.992038ms]
Aug 26 23:42:42.510: INFO: Created: latency-svc-9wt74
Aug 26 23:42:42.526: INFO: Got endpoints: latency-svc-9wt74 [939.731204ms]
Aug 26 23:42:42.595: INFO: Created: latency-svc-nt7q9
Aug 26 23:42:42.602: INFO: Got endpoints: latency-svc-nt7q9 [983.666883ms]
Aug 26 23:42:42.655: INFO: Created: latency-svc-8p4sw
Aug 26 23:42:42.799: INFO: Got endpoints: latency-svc-8p4sw [1.116429863s]
Aug 26 23:42:42.805: INFO: Created: latency-svc-j62dj
Aug 26 23:42:42.851: INFO: Got endpoints: latency-svc-j62dj [1.120317966s]
Aug 26 23:42:42.983: INFO: Created: latency-svc-cnsrz
Aug 26 23:42:42.986: INFO: Got endpoints: latency-svc-cnsrz [1.128414754s]
Aug 26 23:42:43.056: INFO: Created: latency-svc-npzqx
Aug 26 23:42:43.066: INFO: Got endpoints: latency-svc-npzqx [1.148232233s]
Aug 26 23:42:43.121: INFO: Created: latency-svc-2r7qf
Aug 26 23:42:43.123: INFO: Got endpoints: latency-svc-2r7qf [1.148709851s]
Aug 26 23:42:43.174: INFO: Created: latency-svc-rp4nq
Aug 26 23:42:43.187: INFO: Got endpoints: latency-svc-rp4nq [1.166775675s]
Aug 26 23:42:43.350: INFO: Created: latency-svc-cfqt7
Aug 26 23:42:43.371: INFO: Got endpoints: latency-svc-cfqt7 [1.3150851s]
Aug 26 23:42:44.001: INFO: Created: latency-svc-vkz76
Aug 26 23:42:44.140: INFO: Got endpoints: latency-svc-vkz76 [2.017539153s]
Aug 26 23:42:44.170: INFO: Created: latency-svc-2nxpx
Aug 26 23:42:44.195: INFO: Got endpoints: latency-svc-2nxpx [2.030199212s]
Aug 26 23:42:44.238: INFO: Created: latency-svc-vvbfx
Aug 26 23:42:44.655: INFO: Got endpoints: latency-svc-vvbfx [2.359979371s]
Aug 26 23:42:44.660: INFO: Created: latency-svc-mq6pn
Aug 26 23:42:44.662: INFO: Got endpoints: latency-svc-mq6pn [2.315704369s]
Aug 26 23:42:44.731: INFO: Created: latency-svc-dshbb
Aug 26 23:42:44.841: INFO: Got endpoints: latency-svc-dshbb [2.366271476s]
Aug 26 23:42:44.959: INFO: Created: latency-svc-hs5vv
Aug 26 23:42:44.963: INFO: Got endpoints: latency-svc-hs5vv [2.472565825s]
Aug 26 23:42:45.015: INFO: Created: latency-svc-7mmmx
Aug 26 23:42:45.029: INFO: Got endpoints: latency-svc-7mmmx [2.502522823s]
Aug 26 23:42:45.127: INFO: Created: latency-svc-8khjx
Aug 26 23:42:45.129: INFO: Got endpoints: latency-svc-8khjx [2.526924257s]
Aug 26 23:42:45.279: INFO: Created: latency-svc-gcsvv
Aug 26 23:42:45.284: INFO: Got endpoints: latency-svc-gcsvv [2.484428121s]
Aug 26 23:42:45.313: INFO: Created: latency-svc-ndprs
Aug 26 23:42:45.328: INFO: Got endpoints: latency-svc-ndprs [2.477160429s]
Aug 26 23:42:45.349: INFO: Created: latency-svc-vgntr
Aug 26 23:42:45.367: INFO: Got endpoints: latency-svc-vgntr [82.900199ms]
Aug 26 23:42:45.427: INFO: Created: latency-svc-9zd59
Aug 26 23:42:45.447: INFO: Got endpoints: latency-svc-9zd59 [2.459884982s]
Aug 26 23:42:45.484: INFO: Created: latency-svc-l6t27
Aug 26 23:42:45.498: INFO: Got endpoints: latency-svc-l6t27 [2.431670018s]
Aug 26 23:42:45.524: INFO: Created: latency-svc-mqnsl
Aug 26 23:42:45.582: INFO: Got endpoints: latency-svc-mqnsl [2.458383638s]
Aug 26 23:42:45.589: INFO: Created: latency-svc-57jd8
Aug 26 23:42:45.606: INFO: Got endpoints: latency-svc-57jd8 [2.41894086s]
Aug 26 23:42:45.633: INFO: Created: latency-svc-qmz8t
Aug 26 23:42:45.642: INFO: Got endpoints: latency-svc-qmz8t [2.270076358s]
Aug 26 23:42:45.669: INFO: Created: latency-svc-ng4rv
Aug 26 23:42:45.672: INFO: Got endpoints: latency-svc-ng4rv [1.53173227s]
Aug 26 23:42:45.720: INFO: Created: latency-svc-k6gxc
Aug 26 23:42:45.726: INFO: Got endpoints: latency-svc-k6gxc [1.530440746s]
Aug 26 23:42:45.751: INFO: Created: latency-svc-gcn8j
Aug 26 23:42:45.768: INFO: Got endpoints: latency-svc-gcn8j [1.113231869s]
Aug 26 23:42:45.814: INFO: Created: latency-svc-zmh2d
Aug 26 23:42:45.857: INFO: Got endpoints: latency-svc-zmh2d [1.195179292s]
Aug 26 23:42:45.868: INFO: Created: latency-svc-9q682
Aug 26 23:42:45.877: INFO: Got endpoints: latency-svc-9q682 [1.035921584s]
Aug 26 23:42:45.903: INFO: Created: latency-svc-wjnjx
Aug 26 23:42:45.913: INFO: Got endpoints: latency-svc-wjnjx [950.585142ms]
Aug 26 23:42:45.936: INFO: Created: latency-svc-28pdq
Aug 26 23:42:45.955: INFO: Got endpoints: latency-svc-28pdq [925.683062ms]
Aug 26 23:42:46.007: INFO: Created: latency-svc-bggcd
Aug 26 23:42:46.010: INFO: Got endpoints: latency-svc-bggcd [880.372661ms]
Aug 26 23:42:46.036: INFO: Created: latency-svc-n5sv2
Aug 26 23:42:46.066: INFO: Got endpoints: latency-svc-n5sv2 [737.349501ms]
Aug 26 23:42:46.101: INFO: Created: latency-svc-l2cwm
Aug 26 23:42:46.145: INFO: Got endpoints: latency-svc-l2cwm [776.875931ms]
Aug 26 23:42:46.183: INFO: Created: latency-svc-n8kcp
Aug 26 23:42:46.208: INFO: Got endpoints: latency-svc-n8kcp [761.077952ms]
Aug 26 23:42:46.243: INFO: Created: latency-svc-zczrf
Aug 26 23:42:46.295: INFO: Got endpoints: latency-svc-zczrf [796.755773ms]
Aug 26 23:42:46.335: INFO: Created: latency-svc-25br5
Aug 26 23:42:46.383: INFO: Got endpoints: latency-svc-25br5 [800.630471ms]
Aug 26 23:42:46.483: INFO: Created: latency-svc-l6jcp
Aug 26 23:42:46.496: INFO: Got endpoints: latency-svc-l6jcp [889.700763ms]
Aug 26 23:42:46.539: INFO: Created: latency-svc-k4gtl
Aug 26 23:42:46.556: INFO: Got endpoints: latency-svc-k4gtl [914.166927ms]
Aug 26 23:42:46.636: INFO: Created: latency-svc-kpjsz
Aug 26 23:42:46.668: INFO: Got endpoints: latency-svc-kpjsz [996.200775ms]
Aug 26 23:42:46.670: INFO: Created: latency-svc-2jrpw
Aug 26 23:42:46.704: INFO: Got endpoints: latency-svc-2jrpw [978.09407ms]
Aug 26 23:42:46.777: INFO: Created: latency-svc-qzhhs
Aug 26 23:42:46.779: INFO: Got endpoints: latency-svc-qzhhs [1.010544401s]
Aug 26 23:42:46.846: INFO: Created: latency-svc-5pqms
Aug 26 23:42:46.905: INFO: Got endpoints: latency-svc-5pqms [1.047259363s]
Aug 26 23:42:46.920: INFO: Created: latency-svc-c2nfp
Aug 26 23:42:46.935: INFO: Got endpoints: latency-svc-c2nfp [1.058188567s]
Aug 26 23:42:46.981: INFO: Created: latency-svc-2w6xm
Aug 26 23:42:46.989: INFO: Got endpoints: latency-svc-2w6xm [1.075515314s]
Aug 26 23:42:47.069: INFO: Created: latency-svc-fzm58
Aug 26 23:42:47.069: INFO: Got endpoints: latency-svc-fzm58 [1.114644984s]
Aug 26 23:42:47.102: INFO: Created: latency-svc-rsxl6
Aug 26 23:42:47.122: INFO: Got endpoints: latency-svc-rsxl6 [1.111975833s]
Aug 26 23:42:47.154: INFO: Created: latency-svc-p6w9l
Aug 26 23:42:47.235: INFO: Got endpoints: latency-svc-p6w9l [1.168612869s]
Aug 26 23:42:47.237: INFO: Created: latency-svc-7jwfp
Aug 26 23:42:47.258: INFO: Got endpoints: latency-svc-7jwfp [1.113448848s]
Aug 26 23:42:47.302: INFO: Created: latency-svc-5vsjh
Aug 26 23:42:47.320: INFO: Got endpoints: latency-svc-5vsjh [1.112425892s]
Aug 26 23:42:47.366: INFO: Created: latency-svc-7v8sm
Aug 26 23:42:47.419: INFO: Got endpoints: latency-svc-7v8sm [1.123839841s]
Aug 26 23:42:47.421: INFO: Created: latency-svc-gvmvj
Aug 26 23:42:47.453: INFO: Got endpoints: latency-svc-gvmvj [1.06966767s]
Aug 26 23:42:47.505: INFO: Created: latency-svc-9sg79
Aug 26 23:42:47.518: INFO: Got endpoints: latency-svc-9sg79 [1.022296018s]
Aug 26 23:42:47.551: INFO: Created: latency-svc-h8mnb
Aug 26 23:42:47.585: INFO: Got endpoints: latency-svc-h8mnb [1.028353925s]
Aug 26 23:42:47.638: INFO: Created: latency-svc-zvj2x
Aug 26 23:42:47.642: INFO: Got endpoints: latency-svc-zvj2x [973.3029ms]
Aug 26 23:42:47.664: INFO: Created: latency-svc-4kbkx
Aug 26 23:42:47.684: INFO: Got endpoints: latency-svc-4kbkx [979.55777ms]
Aug 26 23:42:47.733: INFO: Created: latency-svc-wnxzb
Aug 26 23:42:47.791: INFO: Got endpoints: latency-svc-wnxzb [1.0122613s]
Aug 26 23:42:47.817: INFO: Created: latency-svc-c8jlj
Aug 26 23:42:47.838: INFO: Got endpoints: latency-svc-c8jlj [932.676655ms]
Aug 26 23:42:47.859: INFO: Created: latency-svc-ctt5n
Aug 26 23:42:47.874: INFO: Got endpoints: latency-svc-ctt5n [938.403148ms]
Aug 26 23:42:47.930: INFO: Created: latency-svc-mr6h6
Aug 26 23:42:47.946: INFO: Got endpoints: latency-svc-mr6h6 [955.980092ms]
Aug 26 23:42:47.970: INFO: Created: latency-svc-p75pg
Aug 26 23:42:47.988: INFO: Got endpoints: latency-svc-p75pg [918.297077ms]
Aug 26 23:42:48.012: INFO: Created: latency-svc-ptfjw
Aug 26 23:42:48.024: INFO: Got endpoints: latency-svc-ptfjw [902.612219ms]
Aug 26 23:42:48.080: INFO: Created: latency-svc-w6xsh
Aug 26 23:42:48.083: INFO: Got endpoints: latency-svc-w6xsh [847.622616ms]
Aug 26 23:42:48.105: INFO: Created: latency-svc-8vsjs
Aug 26 23:42:48.121: INFO: Got endpoints: latency-svc-8vsjs [862.001207ms]
Aug 26 23:42:48.144: INFO: Created: latency-svc-qr9gb
Aug 26 23:42:48.163: INFO: Got endpoints: latency-svc-qr9gb [842.553869ms]
Aug 26 23:42:48.241: INFO: Created: latency-svc-wvxqk
Aug 26 23:42:48.270: INFO: Got endpoints: latency-svc-wvxqk [851.107427ms]
Aug 26 23:42:48.327: INFO: Created: latency-svc-vsgg7
Aug 26 23:42:48.378: INFO: Got endpoints: latency-svc-vsgg7 [924.985021ms]
Aug 26 23:42:48.402: INFO: Created: latency-svc-9nmd2
Aug 26 23:42:48.428: INFO: Got endpoints: latency-svc-9nmd2 [909.07487ms]
Aug 26 23:42:48.462: INFO: Created: latency-svc-c7xbz
Aug 26 23:42:48.476: INFO: Got endpoints: latency-svc-c7xbz [890.861773ms]
Aug 26 23:42:48.540: INFO: Created: latency-svc-pdw6t
Aug 26 23:42:48.554: INFO: Got endpoints: latency-svc-pdw6t [912.034663ms]
Aug 26 23:42:48.594: INFO: Created: latency-svc-kbq7k
Aug 26 23:42:48.608: INFO: Got endpoints: latency-svc-kbq7k [923.662674ms]
Aug 26 23:42:48.631: INFO: Created: latency-svc-6t8gp
Aug 26 23:42:48.638: INFO: Got endpoints: latency-svc-6t8gp [846.281673ms]
Aug 26 23:42:48.714: INFO: Created: latency-svc-lhrzn
Aug 26 23:42:48.753: INFO: Got endpoints: latency-svc-lhrzn [915.077088ms]
Aug 26 23:42:48.789: INFO: Created: latency-svc-ftplj
Aug 26 23:42:48.807: INFO: Got endpoints: latency-svc-ftplj [932.936767ms]
Aug 26 23:42:48.852: INFO: Created: latency-svc-zjwwg
Aug 26 23:42:48.863: INFO: Got endpoints: latency-svc-zjwwg [917.640063ms]
Aug 26 23:42:48.912: INFO: Created: latency-svc-gk8mk
Aug 26 23:42:48.927: INFO: Got endpoints: latency-svc-gk8mk [939.019946ms]
Aug 26 23:42:48.952: INFO: Created: latency-svc-5n48s
Aug 26 23:42:49.019: INFO: Got endpoints: latency-svc-5n48s [994.023701ms]
Aug 26 23:42:49.021: INFO: Created: latency-svc-n78mf
Aug 26 23:42:49.056: INFO: Got endpoints: latency-svc-n78mf [973.077093ms]
Aug 26 23:42:49.105: INFO: Created: latency-svc-wjrt4
Aug 26 23:42:49.157: INFO: Got endpoints: latency-svc-wjrt4 [1.035623987s]
Aug 26 23:42:49.179: INFO: Created: latency-svc-nd8f2
Aug 26 23:42:49.198: INFO: Got endpoints: latency-svc-nd8f2 [1.034821875s]
Aug 26 23:42:49.222: INFO: Created: latency-svc-74rxt
Aug 26 23:42:49.302: INFO: Got endpoints: latency-svc-74rxt [1.031460544s]
Aug 26 23:42:49.345: INFO: Created: latency-svc-sjhk6
Aug 26 23:42:49.370: INFO: Got endpoints: latency-svc-sjhk6 [991.492591ms]
Aug 26 23:42:49.433: INFO: Created: latency-svc-9vq4k
Aug 26 23:42:49.435: INFO: Got endpoints: latency-svc-9vq4k [1.007283764s]
Aug 26 23:42:49.490: INFO: Created: latency-svc-99dk7
Aug 26 23:42:49.514: INFO: Got endpoints: latency-svc-99dk7 [1.037443754s]
Aug 26 23:42:49.576: INFO: Created: latency-svc-gmw55
Aug 26 23:42:49.602: INFO: Got endpoints: latency-svc-gmw55 [1.048114022s]
Aug 26 23:42:49.603: INFO: Created: latency-svc-4qq4x
Aug 26 23:42:49.616: INFO: Got endpoints: latency-svc-4qq4x [1.007492096s]
Aug 26 23:42:49.641: INFO: Created: latency-svc-sbspb
Aug 26 23:42:49.658: INFO: Got endpoints: latency-svc-sbspb [1.020002394s]
Aug 26 23:42:49.726: INFO: Created: latency-svc-q2qnb
Aug 26 23:42:49.729: INFO: Got endpoints: latency-svc-q2qnb [976.197922ms]
Aug 26 23:42:49.758: INFO: Created: latency-svc-rvx52
Aug 26 23:42:49.773: INFO: Got endpoints: latency-svc-rvx52 [965.659055ms]
Aug 26 23:42:49.888: INFO: Created: latency-svc-2ghvj
Aug 26 23:42:49.890: INFO: Got endpoints: latency-svc-2ghvj [1.026282298s]
Aug 26 23:42:49.941: INFO: Created: latency-svc-mtbfg
Aug 26 23:42:49.986: INFO: Got endpoints: latency-svc-mtbfg [1.058050166s]
Aug 26 23:42:50.040: INFO: Created: latency-svc-sfsvp
Aug 26 23:42:50.086: INFO: Got endpoints: latency-svc-sfsvp [1.066634078s]
Aug 26 23:42:50.211: INFO: Created: latency-svc-gsd4c
Aug 26 23:42:50.218: INFO: Got endpoints: latency-svc-gsd4c [1.161523989s]
Aug 26 23:42:50.245: INFO: Created: latency-svc-c5v24
Aug 26 23:42:50.259: INFO: Got endpoints: latency-svc-c5v24 [1.10258482s]
Aug 26 23:42:50.283: INFO: Created: latency-svc-wwwn7
Aug 26 23:42:50.302: INFO: Got endpoints: latency-svc-wwwn7 [1.103045961s]
Aug 26 23:42:50.348: INFO: Created: latency-svc-q29h4
Aug 26 23:42:50.353: INFO: Got endpoints: latency-svc-q29h4 [1.05080746s]
Aug 26 23:42:50.379: INFO: Created: latency-svc-n8cxm
Aug 26 23:42:50.399: INFO: Got endpoints: latency-svc-n8cxm [1.029301373s]
Aug 26 23:42:50.431: INFO: Created: latency-svc-g59n4
Aug 26 23:42:50.528: INFO: Got endpoints: latency-svc-g59n4 [1.092658085s]
Aug 26 23:42:50.553: INFO: Created: latency-svc-zmw5z
Aug 26 23:42:50.567: INFO: Got endpoints: latency-svc-zmw5z [1.052906114s]
Aug 26 23:42:50.598: INFO: Created: latency-svc-hjhtg
Aug 26 23:42:50.621: INFO: Got endpoints: latency-svc-hjhtg [1.019042872s]
Aug 26 23:42:50.672: INFO: Created: latency-svc-zv44s
Aug 26 23:42:50.691: INFO: Got endpoints: latency-svc-zv44s [1.074832819s]
Aug 26 23:42:50.739: INFO: Created: latency-svc-j8jm8
Aug 26 23:42:50.754: INFO: Got endpoints: latency-svc-j8jm8 [1.095496194s]
Aug 26 23:42:50.810: INFO: Created: latency-svc-cx9fx
Aug 26 23:42:50.811: INFO: Got endpoints: latency-svc-cx9fx [1.081645864s]
Aug 26 23:42:50.874: INFO: Created: latency-svc-6zw7z
Aug 26 23:42:50.899: INFO: Got endpoints: latency-svc-6zw7z [1.125517643s]
Aug 26 23:42:50.954: INFO: Created: latency-svc-xvqmp
Aug 26 23:42:50.958: INFO: Got endpoints: latency-svc-xvqmp [1.068174807s]
Aug 26 23:42:50.979: INFO: Created: latency-svc-zjz4c
Aug 26 23:42:51.000: INFO: Got endpoints: latency-svc-zjz4c [1.014028097s]
Aug 26 23:42:51.024: INFO: Created: latency-svc-tc8k2
Aug 26 23:42:51.115: INFO: Got endpoints: latency-svc-tc8k2 [1.028886979s]
Aug 26 23:42:51.117: INFO: Created: latency-svc-5tndj
Aug 26 23:42:51.139: INFO: Got endpoints: latency-svc-5tndj [921.228989ms]
Aug 26 23:42:51.195: INFO: Created: latency-svc-ztv2b
Aug 26 23:42:51.205: INFO: Got endpoints: latency-svc-ztv2b [945.211928ms]
Aug 26 23:42:51.259: INFO: Created: latency-svc-xv5p9
Aug 26 23:42:51.265: INFO: Got endpoints: latency-svc-xv5p9 [963.304358ms]
Aug 26 23:42:51.312: INFO: Created: latency-svc-9mm27
Aug 26 23:42:51.344: INFO: Got endpoints: latency-svc-9mm27 [990.799194ms]
Aug 26 23:42:51.409: INFO: Created: latency-svc-bb42t
Aug 26 23:42:51.411: INFO: Got endpoints: latency-svc-bb42t [1.011318249s]
Aug 26 23:42:51.450: INFO: Created: latency-svc-wb7s7
Aug 26 23:42:51.486: INFO: Got endpoints: latency-svc-wb7s7 [957.53771ms]
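Each `Created:`/`Got endpoints:` pair above times how long a Service takes from creation until its endpoints are observed; the bracketed value is that delta printed as a Go duration. A minimal sketch of pulling those durations out of lines shaped like the ones above and normalizing them to milliseconds (`sample.log` is illustrative, with values copied from this run):

```shell
#!/bin/sh
# Sample lines copied from the log above.
cat > sample.log <<'EOF'
Aug 26 23:42:48.378: INFO: Got endpoints: latency-svc-vsgg7 [924.985021ms]
Aug 26 23:42:49.157: INFO: Got endpoints: latency-svc-wjrt4 [1.035623987s]
Aug 26 23:42:51.486: INFO: Got endpoints: latency-svc-wb7s7 [957.53771ms]
EOF

# Extract the bracketed Go duration and normalize to milliseconds.
result=$(awk '
  match($0, /\[[0-9.]+m?s\]/) {
    d = substr($0, RSTART + 1, RLENGTH - 2)        # e.g. "924.985021ms"
    if (d ~ /ms$/) { sub(/ms$/, "", d); ms = d + 0 }
    else           { sub(/s$/,  "", d); ms = (d + 0) * 1000 }
    printf "%s %.3f ms\n", $7, ms                  # $7 is the service name
  }' sample.log)
printf '%s\n' "$result"
```

This only handles the `ms`/`s` units that appear in this run; a general Go-duration parser would also need `µs`, `m`, and friends.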
Aug 26 23:42:51.486: INFO: Latencies: [82.900199ms 161.275919ms 437.756961ms 502.591686ms 590.29621ms 656.586612ms 737.349501ms 739.893378ms 761.077952ms 776.875931ms 796.755773ms 800.630471ms 842.553869ms 846.281673ms 847.622616ms 851.107427ms 860.228658ms 861.911052ms 862.001207ms 878.488146ms 879.177575ms 880.280281ms 880.372661ms 885.990789ms 889.700763ms 890.861773ms 902.612219ms 909.07487ms 912.034663ms 914.166927ms 915.077088ms 916.450543ms 917.640063ms 918.297077ms 921.228989ms 923.662674ms 924.985021ms 925.683062ms 932.676655ms 932.936767ms 933.992038ms 938.403148ms 939.019946ms 939.731204ms 943.89749ms 945.211928ms 950.585142ms 955.344985ms 955.980092ms 957.53771ms 963.304358ms 965.594375ms 965.659055ms 973.077093ms 973.3029ms 976.197922ms 978.09407ms 978.685543ms 979.55777ms 983.666883ms 986.821184ms 990.799194ms 991.492591ms 991.965668ms 994.023701ms 996.200775ms 998.704233ms 1.007283764s 1.007492096s 1.010544401s 1.011318249s 1.0122613s 1.014028097s 1.019042872s 1.020002394s 1.022296018s 1.026282298s 1.028353925s 1.028886979s 1.029301373s 1.031460544s 1.034349188s 1.034821875s 1.035623987s 1.035921584s 1.037443754s 1.047259363s 1.048114022s 1.05080746s 1.052906114s 1.057661294s 1.058050166s 1.058188567s 1.066235507s 1.066634078s 1.068174807s 1.068853282s 1.06966767s 1.074832819s 1.075515314s 1.07832071s 1.081645864s 1.092658085s 1.095398678s 1.095496194s 1.10258482s 1.103045961s 1.111975833s 1.112425892s 1.113231869s 1.113448848s 1.114644984s 1.116429863s 1.120317966s 1.123839841s 1.125517643s 1.128414754s 1.141184933s 1.148232233s 1.148709851s 1.161523989s 1.166775675s 1.168612869s 1.1736893s 1.195179292s 1.207738226s 1.21529085s 1.237092062s 1.305214774s 1.3150851s 1.316954619s 1.329695148s 1.387834797s 1.525238545s 1.529833668s 1.530440746s 1.53173227s 1.550884939s 1.602207453s 1.606538026s 1.632647909s 1.654607701s 1.68624754s 1.686905326s 1.692114488s 1.704653575s 1.783264594s 1.785440763s 1.825653258s 1.905822774s 2.017539153s 2.023376576s 
2.030199212s 2.093927351s 2.102512751s 2.113097667s 2.128745788s 2.142680447s 2.148234361s 2.179398779s 2.196960505s 2.202303502s 2.206082013s 2.210329333s 2.223439834s 2.270076358s 2.286888631s 2.315704369s 2.359979371s 2.366271476s 2.387913307s 2.41894086s 2.431670018s 2.458383638s 2.459884982s 2.472565825s 2.477160429s 2.484428121s 2.502522823s 2.506397811s 2.514428935s 2.526924257s 2.536263445s 2.547098243s 2.567937776s 2.59422683s 2.609568183s 2.62096336s 2.639247825s 2.670514119s 2.824807884s 2.915017324s 3.004436173s 3.004980062s 3.315204002s 3.324352152s 3.334746096s 3.459998436s 3.489649043s 3.532033552s]
Aug 26 23:42:51.487: INFO: 50 %ile: 1.07832071s
Aug 26 23:42:51.487: INFO: 90 %ile: 2.514428935s
Aug 26 23:42:51.487: INFO: 99 %ile: 3.489649043s
Aug 26 23:42:51.488: INFO: Total sample count: 200
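The framework sorts all 200 samples (the `Latencies:` line above) and reports percentiles from the sorted list. A small sketch of that calculation under one common nearest-rank convention (the e2e framework's exact rounding may differ slightly; `latencies.txt` is illustrative data, not taken from this run):

```shell
#!/bin/sh
# Illustrative data: one latency value in seconds per line.
printf '%s\n' 0.8 1.2 0.9 2.5 1.1 3.4 1.0 0.7 2.1 1.6 > latencies.txt

# Nearest-rank percentile over the sorted samples:
# take the value at 1-based index ceil(p * N / 100).
percentile() {
  sort -n latencies.txt | awk -v p="$1" '
    { v[NR] = $0 }
    END { print v[int((p * NR + 99) / 100)] }'
}

echo "50 %ile: $(percentile 50)"
echo "90 %ile: $(percentile 90)"
echo "99 %ile: $(percentile 99)"
```

With 200 samples, the 50/90/99 %ile indices land on the 100th, 180th, and 198th sorted values, which is how a single slow sample pulls the 99th percentile toward the 3.5s tail seen above.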
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:42:51.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-628" for this suite.
Aug 26 23:43:33.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:43:34.793: INFO: namespace svc-latency-628 deletion completed in 43.288508934s

• [SLOW TEST:68.464 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:43:34.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1387
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1387
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1387
Aug 26 23:43:35.080: INFO: Found 0 stateful pods, waiting for 1
Aug 26 23:43:45.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 26 23:43:45.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 23:43:46.823: INFO: stderr: "I0826 23:43:46.652536    1866 log.go:172] (0x400072a4d0) (0x4000790640) Create stream\nI0826 23:43:46.654363    1866 log.go:172] (0x400072a4d0) (0x4000790640) Stream added, broadcasting: 1\nI0826 23:43:46.664391    1866 log.go:172] (0x400072a4d0) Reply frame received for 1\nI0826 23:43:46.665071    1866 log.go:172] (0x400072a4d0) (0x40007906e0) Create stream\nI0826 23:43:46.665129    1866 log.go:172] (0x400072a4d0) (0x40007906e0) Stream added, broadcasting: 3\nI0826 23:43:46.666729    1866 log.go:172] (0x400072a4d0) Reply frame received for 3\nI0826 23:43:46.666995    1866 log.go:172] (0x400072a4d0) (0x400064c320) Create stream\nI0826 23:43:46.667081    1866 log.go:172] (0x400072a4d0) (0x400064c320) Stream added, broadcasting: 5\nI0826 23:43:46.668543    1866 log.go:172] (0x400072a4d0) Reply frame received for 5\nI0826 23:43:46.730259    1866 log.go:172] (0x400072a4d0) Data frame received for 5\nI0826 23:43:46.730603    1866 log.go:172] (0x400064c320) (5) Data frame handling\nI0826 23:43:46.731256    1866 log.go:172] (0x400064c320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 23:43:46.802193    1866 log.go:172] (0x400072a4d0) Data frame received for 5\nI0826 23:43:46.802299    1866 log.go:172] (0x400064c320) (5) Data frame handling\nI0826 23:43:46.802653    1866 log.go:172] (0x400072a4d0) Data frame received for 3\nI0826 23:43:46.802721    1866 log.go:172] (0x40007906e0) (3) Data frame handling\nI0826 23:43:46.802790    1866 log.go:172] (0x40007906e0) (3) Data frame sent\nI0826 23:43:46.802843    1866 log.go:172] (0x400072a4d0) Data frame received for 3\nI0826 23:43:46.802885    1866 log.go:172] (0x40007906e0) (3) Data frame handling\nI0826 23:43:46.804978    1866 log.go:172] (0x400072a4d0) Data frame received for 1\nI0826 23:43:46.805033    1866 log.go:172] (0x4000790640) (1) Data frame handling\nI0826 23:43:46.805108    1866 log.go:172] (0x4000790640) (1) Data frame sent\nI0826 23:43:46.805593    
1866 log.go:172] (0x400072a4d0) (0x4000790640) Stream removed, broadcasting: 1\nI0826 23:43:46.808545    1866 log.go:172] (0x400072a4d0) (0x4000790640) Stream removed, broadcasting: 1\nI0826 23:43:46.808910    1866 log.go:172] (0x400072a4d0) (0x40007906e0) Stream removed, broadcasting: 3\nI0826 23:43:46.810520    1866 log.go:172] (0x400072a4d0) (0x400064c320) Stream removed, broadcasting: 5\n"
Aug 26 23:43:46.824: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 23:43:46.824: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 26 23:43:46.830: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 23:43:56.837: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:43:56.837: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:43:56.877: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 23:43:56.880: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  }]
Aug 26 23:43:56.880: INFO: ss-1                Pending         []
Aug 26 23:43:56.881: INFO: 
Aug 26 23:43:56.881: INFO: StatefulSet ss has not reached scale 3, at 2
Aug 26 23:43:57.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.969971422s
Aug 26 23:43:59.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961427543s
Aug 26 23:44:00.207: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.786170037s
Aug 26 23:44:01.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.643392008s
Aug 26 23:44:02.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.635372839s
Aug 26 23:44:03.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.625712655s
Aug 26 23:44:04.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.612609216s
Aug 26 23:44:05.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.605087125s
Aug 26 23:44:06.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 595.494455ms
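The countdown above is a poll-with-deadline loop: for a fixed window the framework repeatedly reads the StatefulSet's replica count and fails if it ever exceeds the expected scale. A local sketch of that pattern (`get_replicas` is a hypothetical stand-in for querying `status.replicas`; the window is shortened to 3s here versus the ~10s used by the test):

```shell
#!/bin/sh
# Hypothetical stand-in for reading the StatefulSet's status.replicas.
get_replicas() { echo 3; }

limit=3
deadline=$(( $(date +%s) + 3 ))            # short 3s window for the sketch

while [ "$(date +%s)" -lt "$deadline" ]; do
  n=$(get_replicas)
  if [ "$n" -gt "$limit" ]; then
    echo "FAIL: statefulset scaled past $limit (at $n)"
    exit 1
  fi
  echo "Verifying statefulset doesn't scale past $limit for another $(( deadline - $(date +%s) ))s"
  sleep 1
done
echo "OK: never exceeded $limit replicas"
```

Polling until the deadline (rather than returning on the first good read) is what makes this a "does not happen within the window" check instead of a point-in-time check.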
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1387
Aug 26 23:44:07.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:44:11.553: INFO: stderr: "I0826 23:44:11.406354    1891 log.go:172] (0x4000e28420) (0x4000874960) Create stream\nI0826 23:44:11.416635    1891 log.go:172] (0x4000e28420) (0x4000874960) Stream added, broadcasting: 1\nI0826 23:44:11.426754    1891 log.go:172] (0x4000e28420) Reply frame received for 1\nI0826 23:44:11.427447    1891 log.go:172] (0x4000e28420) (0x400065a320) Create stream\nI0826 23:44:11.427543    1891 log.go:172] (0x4000e28420) (0x400065a320) Stream added, broadcasting: 3\nI0826 23:44:11.429252    1891 log.go:172] (0x4000e28420) Reply frame received for 3\nI0826 23:44:11.429595    1891 log.go:172] (0x4000e28420) (0x4000874000) Create stream\nI0826 23:44:11.429703    1891 log.go:172] (0x4000e28420) (0x4000874000) Stream added, broadcasting: 5\nI0826 23:44:11.431280    1891 log.go:172] (0x4000e28420) Reply frame received for 5\nI0826 23:44:11.528275    1891 log.go:172] (0x4000e28420) Data frame received for 5\nI0826 23:44:11.528446    1891 log.go:172] (0x4000e28420) Data frame received for 1\nI0826 23:44:11.528577    1891 log.go:172] (0x4000e28420) Data frame received for 3\nI0826 23:44:11.528869    1891 log.go:172] (0x400065a320) (3) Data frame handling\nI0826 23:44:11.528974    1891 log.go:172] (0x4000874960) (1) Data frame handling\nI0826 23:44:11.529271    1891 log.go:172] (0x4000874000) (5) Data frame handling\nI0826 23:44:11.530152    1891 log.go:172] (0x4000874000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0826 23:44:11.530658    1891 log.go:172] (0x4000874960) (1) Data frame sent\nI0826 23:44:11.530898    1891 log.go:172] (0x4000e28420) Data frame received for 5\nI0826 23:44:11.530972    1891 log.go:172] (0x4000874000) (5) Data frame handling\nI0826 23:44:11.532021    1891 log.go:172] (0x400065a320) (3) Data frame sent\nI0826 23:44:11.532135    1891 log.go:172] (0x4000e28420) Data frame received for 3\nI0826 23:44:11.532200    1891 log.go:172] (0x400065a320) (3) Data frame handling\nI0826 23:44:11.533921    
1891 log.go:172] (0x4000e28420) (0x4000874960) Stream removed, broadcasting: 1\nI0826 23:44:11.534511    1891 log.go:172] (0x4000e28420) Go away received\nI0826 23:44:11.537125    1891 log.go:172] (0x4000e28420) (0x4000874960) Stream removed, broadcasting: 1\nI0826 23:44:11.537358    1891 log.go:172] (0x4000e28420) (0x400065a320) Stream removed, broadcasting: 3\nI0826 23:44:11.537558    1891 log.go:172] (0x4000e28420) (0x4000874000) Stream removed, broadcasting: 5\n"
Aug 26 23:44:11.554: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 23:44:11.554: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 26 23:44:11.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:44:13.041: INFO: stderr: "I0826 23:44:12.950825    1928 log.go:172] (0x400012cdc0) (0x40004866e0) Create stream\nI0826 23:44:12.954681    1928 log.go:172] (0x400012cdc0) (0x40004866e0) Stream added, broadcasting: 1\nI0826 23:44:12.964623    1928 log.go:172] (0x400012cdc0) Reply frame received for 1\nI0826 23:44:12.965311    1928 log.go:172] (0x400012cdc0) (0x40006b8000) Create stream\nI0826 23:44:12.965386    1928 log.go:172] (0x400012cdc0) (0x40006b8000) Stream added, broadcasting: 3\nI0826 23:44:12.966701    1928 log.go:172] (0x400012cdc0) Reply frame received for 3\nI0826 23:44:12.966929    1928 log.go:172] (0x400012cdc0) (0x40006b80a0) Create stream\nI0826 23:44:12.966981    1928 log.go:172] (0x400012cdc0) (0x40006b80a0) Stream added, broadcasting: 5\nI0826 23:44:12.968628    1928 log.go:172] (0x400012cdc0) Reply frame received for 5\nI0826 23:44:13.020268    1928 log.go:172] (0x400012cdc0) Data frame received for 3\nI0826 23:44:13.020485    1928 log.go:172] (0x400012cdc0) Data frame received for 1\nI0826 23:44:13.020622    1928 log.go:172] (0x400012cdc0) Data frame received for 5\nI0826 23:44:13.020698    1928 log.go:172] (0x40006b80a0) (5) Data frame handling\nI0826 23:44:13.020895    1928 log.go:172] (0x40004866e0) (1) Data frame handling\nI0826 23:44:13.021138    1928 log.go:172] (0x40006b8000) (3) Data frame handling\nI0826 23:44:13.021495    1928 log.go:172] (0x40004866e0) (1) Data frame sent\nI0826 23:44:13.021787    1928 log.go:172] (0x40006b80a0) (5) Data frame sent\nI0826 23:44:13.022034    1928 log.go:172] (0x40006b8000) (3) Data frame sent\nI0826 23:44:13.022127    1928 log.go:172] (0x400012cdc0) Data frame received for 3\nI0826 23:44:13.022175    1928 log.go:172] (0x40006b8000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 23:44:13.022322    1928 log.go:172] (0x400012cdc0) Data frame received for 5\nI0826 23:44:13.022392    1928 
log.go:172] (0x40006b80a0) (5) Data frame handling\nI0826 23:44:13.023336    1928 log.go:172] (0x400012cdc0) (0x40004866e0) Stream removed, broadcasting: 1\nI0826 23:44:13.026580    1928 log.go:172] (0x400012cdc0) (0x40004866e0) Stream removed, broadcasting: 1\nI0826 23:44:13.026975    1928 log.go:172] (0x400012cdc0) (0x40006b8000) Stream removed, broadcasting: 3\nI0826 23:44:13.029803    1928 log.go:172] (0x400012cdc0) Go away received\nI0826 23:44:13.029960    1928 log.go:172] (0x400012cdc0) (0x40006b80a0) Stream removed, broadcasting: 5\n"
Aug 26 23:44:13.042: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 23:44:13.042: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 26 23:44:13.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:44:14.557: INFO: stderr: "I0826 23:44:14.435079    1951 log.go:172] (0x4000890000) (0x40009601e0) Create stream\nI0826 23:44:14.441250    1951 log.go:172] (0x4000890000) (0x40009601e0) Stream added, broadcasting: 1\nI0826 23:44:14.454598    1951 log.go:172] (0x4000890000) Reply frame received for 1\nI0826 23:44:14.455994    1951 log.go:172] (0x4000890000) (0x400068e320) Create stream\nI0826 23:44:14.456123    1951 log.go:172] (0x4000890000) (0x400068e320) Stream added, broadcasting: 3\nI0826 23:44:14.458814    1951 log.go:172] (0x4000890000) Reply frame received for 3\nI0826 23:44:14.459364    1951 log.go:172] (0x4000890000) (0x400068e3c0) Create stream\nI0826 23:44:14.459491    1951 log.go:172] (0x4000890000) (0x400068e3c0) Stream added, broadcasting: 5\nI0826 23:44:14.461596    1951 log.go:172] (0x4000890000) Reply frame received for 5\nI0826 23:44:14.534290    1951 log.go:172] (0x4000890000) Data frame received for 5\nI0826 23:44:14.534727    1951 log.go:172] (0x4000890000) Data frame received for 3\nI0826 23:44:14.534885    1951 log.go:172] (0x400068e320) (3) Data frame handling\nI0826 23:44:14.535189    1951 log.go:172] (0x400068e3c0) (5) Data frame handling\nI0826 23:44:14.535438    1951 log.go:172] (0x4000890000) Data frame received for 1\nI0826 23:44:14.535561    1951 log.go:172] (0x40009601e0) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 23:44:14.537681    1951 log.go:172] (0x400068e3c0) (5) Data frame sent\nI0826 23:44:14.537803    1951 log.go:172] (0x40009601e0) (1) Data frame sent\nI0826 23:44:14.538087    1951 log.go:172] (0x400068e320) (3) Data frame sent\nI0826 23:44:14.538277    1951 log.go:172] (0x4000890000) Data frame received for 5\nI0826 23:44:14.538401    1951 log.go:172] (0x400068e3c0) (5) Data frame handling\nI0826 23:44:14.538509    1951 log.go:172] (0x4000890000) Data frame received for 3\nI0826 23:44:14.538623    1951 
log.go:172] (0x400068e320) (3) Data frame handling\nI0826 23:44:14.541452    1951 log.go:172] (0x4000890000) (0x40009601e0) Stream removed, broadcasting: 1\nI0826 23:44:14.543098    1951 log.go:172] (0x4000890000) Go away received\nI0826 23:44:14.546598    1951 log.go:172] (0x4000890000) (0x40009601e0) Stream removed, broadcasting: 1\nI0826 23:44:14.546886    1951 log.go:172] (0x4000890000) (0x400068e320) Stream removed, broadcasting: 3\nI0826 23:44:14.547130    1951 log.go:172] (0x4000890000) (0x400068e3c0) Stream removed, broadcasting: 5\n"
Aug 26 23:44:14.558: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 23:44:14.558: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
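Note the stderr for ss-1 and ss-2 above: `mv: can't rename '/tmp/index.html': No such file or directory` followed by `+ true`. Those pods never had their index.html moved aside, so the restore `mv` fails, but the trailing `|| true` swallows the error and the exec step still exits 0. A local sketch of that behavior (a scratch directory stands in for the pod filesystems):

```shell
#!/bin/sh
dir=$(mktemp -d)
mkdir -p "$dir/html" "$dir/tmp"
echo hello > "$dir/html/index.html"

# ss-0: the file really was moved aside earlier, so the restore succeeds.
mv -v "$dir/html/index.html" "$dir/tmp/"
mv -v "$dir/tmp/index.html" "$dir/html/" || true
echo "restore on ss-0: exit $?"

# ss-1 / ss-2: nothing to restore, so mv fails -- but `|| true` masks the
# failure, matching the "+ true" seen in the stderr above.
mv -v "$dir/tmp/index.html" "$dir/html/" 2>&1 || true
echo "restore on ss-1: exit $?"
```

This is why the test can run the same restore command unconditionally against every replica during burst scaling.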

Aug 26 23:44:14.565: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:44:14.565: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:44:14.565: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 26 23:44:14.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 23:44:16.019: INFO: stderr: "I0826 23:44:15.903483    1973 log.go:172] (0x40002fc580) (0x40003dc6e0) Create stream\nI0826 23:44:15.906198    1973 log.go:172] (0x40002fc580) (0x40003dc6e0) Stream added, broadcasting: 1\nI0826 23:44:15.915440    1973 log.go:172] (0x40002fc580) Reply frame received for 1\nI0826 23:44:15.916049    1973 log.go:172] (0x40002fc580) (0x4000684320) Create stream\nI0826 23:44:15.916127    1973 log.go:172] (0x40002fc580) (0x4000684320) Stream added, broadcasting: 3\nI0826 23:44:15.918115    1973 log.go:172] (0x40002fc580) Reply frame received for 3\nI0826 23:44:15.918655    1973 log.go:172] (0x40002fc580) (0x40006843c0) Create stream\nI0826 23:44:15.918779    1973 log.go:172] (0x40002fc580) (0x40006843c0) Stream added, broadcasting: 5\nI0826 23:44:15.920615    1973 log.go:172] (0x40002fc580) Reply frame received for 5\nI0826 23:44:16.002338    1973 log.go:172] (0x40002fc580) Data frame received for 5\nI0826 23:44:16.002561    1973 log.go:172] (0x40002fc580) Data frame received for 1\nI0826 23:44:16.002756    1973 log.go:172] (0x40002fc580) Data frame received for 3\nI0826 23:44:16.002899    1973 log.go:172] (0x40006843c0) (5) Data frame handling\nI0826 23:44:16.003048    1973 log.go:172] (0x4000684320) (3) Data frame handling\nI0826 23:44:16.003197    1973 log.go:172] (0x40003dc6e0) (1) Data frame handling\nI0826 23:44:16.003867    1973 log.go:172] (0x40006843c0) (5) Data frame sent\nI0826 23:44:16.004108    1973 log.go:172] (0x40003dc6e0) (1) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 23:44:16.005146    1973 log.go:172] (0x4000684320) (3) Data frame sent\nI0826 23:44:16.005210    1973 log.go:172] (0x40002fc580) Data frame received for 3\nI0826 23:44:16.005937    1973 log.go:172] (0x40002fc580) Data frame received for 5\nI0826 23:44:16.006005    1973 log.go:172] (0x40002fc580) (0x40003dc6e0) Stream removed, broadcasting: 1\nI0826 23:44:16.006265    1973 log.go:172] (0x4000684320) (3) Data frame 
handling\nI0826 23:44:16.006394    1973 log.go:172] (0x40006843c0) (5) Data frame handling\nI0826 23:44:16.008986    1973 log.go:172] (0x40002fc580) (0x40003dc6e0) Stream removed, broadcasting: 1\nI0826 23:44:16.009182    1973 log.go:172] (0x40002fc580) Go away received\nI0826 23:44:16.009248    1973 log.go:172] (0x40002fc580) (0x4000684320) Stream removed, broadcasting: 3\nI0826 23:44:16.009974    1973 log.go:172] (0x40002fc580) (0x40006843c0) Stream removed, broadcasting: 5\n"
Aug 26 23:44:16.020: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 23:44:16.020: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 26 23:44:16.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 23:44:17.618: INFO: stderr: "I0826 23:44:17.435728    1996 log.go:172] (0x4000128dc0) (0x40006326e0) Create stream\nI0826 23:44:17.438506    1996 log.go:172] (0x4000128dc0) (0x40006326e0) Stream added, broadcasting: 1\nI0826 23:44:17.450402    1996 log.go:172] (0x4000128dc0) Reply frame received for 1\nI0826 23:44:17.451776    1996 log.go:172] (0x4000128dc0) (0x4000632780) Create stream\nI0826 23:44:17.451907    1996 log.go:172] (0x4000128dc0) (0x4000632780) Stream added, broadcasting: 3\nI0826 23:44:17.453705    1996 log.go:172] (0x4000128dc0) Reply frame received for 3\nI0826 23:44:17.453953    1996 log.go:172] (0x4000128dc0) (0x40007e4000) Create stream\nI0826 23:44:17.454023    1996 log.go:172] (0x4000128dc0) (0x40007e4000) Stream added, broadcasting: 5\nI0826 23:44:17.455349    1996 log.go:172] (0x4000128dc0) Reply frame received for 5\nI0826 23:44:17.553185    1996 log.go:172] (0x4000128dc0) Data frame received for 5\nI0826 23:44:17.553432    1996 log.go:172] (0x40007e4000) (5) Data frame handling\nI0826 23:44:17.553937    1996 log.go:172] (0x40007e4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 23:44:17.591826    1996 log.go:172] (0x4000128dc0) Data frame received for 3\nI0826 23:44:17.591967    1996 log.go:172] (0x4000632780) (3) Data frame handling\nI0826 23:44:17.592123    1996 log.go:172] (0x4000128dc0) Data frame received for 5\nI0826 23:44:17.592375    1996 log.go:172] (0x40007e4000) (5) Data frame handling\nI0826 23:44:17.592616    1996 log.go:172] (0x4000632780) (3) Data frame sent\nI0826 23:44:17.592873    1996 log.go:172] (0x4000128dc0) Data frame received for 3\nI0826 23:44:17.593027    1996 log.go:172] (0x4000632780) (3) Data frame handling\nI0826 23:44:17.594121    1996 log.go:172] (0x4000128dc0) Data frame received for 1\nI0826 23:44:17.594266    1996 log.go:172] (0x40006326e0) (1) Data frame handling\nI0826 23:44:17.594464    1996 log.go:172] (0x40006326e0) (1) Data frame sent\nI0826 23:44:17.597112    
1996 log.go:172] (0x4000128dc0) (0x40006326e0) Stream removed, broadcasting: 1\nI0826 23:44:17.600000    1996 log.go:172] (0x4000128dc0) Go away received\nI0826 23:44:17.605471    1996 log.go:172] (0x4000128dc0) (0x40006326e0) Stream removed, broadcasting: 1\nI0826 23:44:17.605784    1996 log.go:172] (0x4000128dc0) (0x4000632780) Stream removed, broadcasting: 3\nI0826 23:44:17.606040    1996 log.go:172] (0x4000128dc0) (0x40007e4000) Stream removed, broadcasting: 5\n"
Aug 26 23:44:17.619: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 23:44:17.619: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 26 23:44:17.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 23:44:19.447: INFO: stderr: "I0826 23:44:19.121257    2019 log.go:172] (0x400050c000) (0x40008401e0) Create stream\nI0826 23:44:19.123954    2019 log.go:172] (0x400050c000) (0x40008401e0) Stream added, broadcasting: 1\nI0826 23:44:19.136168    2019 log.go:172] (0x400050c000) Reply frame received for 1\nI0826 23:44:19.137120    2019 log.go:172] (0x400050c000) (0x4000650320) Create stream\nI0826 23:44:19.137213    2019 log.go:172] (0x400050c000) (0x4000650320) Stream added, broadcasting: 3\nI0826 23:44:19.138715    2019 log.go:172] (0x400050c000) Reply frame received for 3\nI0826 23:44:19.138952    2019 log.go:172] (0x400050c000) (0x40006503c0) Create stream\nI0826 23:44:19.139012    2019 log.go:172] (0x400050c000) (0x40006503c0) Stream added, broadcasting: 5\nI0826 23:44:19.140332    2019 log.go:172] (0x400050c000) Reply frame received for 5\nI0826 23:44:19.218800    2019 log.go:172] (0x400050c000) Data frame received for 5\nI0826 23:44:19.219079    2019 log.go:172] (0x40006503c0) (5) Data frame handling\nI0826 23:44:19.219614    2019 log.go:172] (0x40006503c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 23:44:19.423110    2019 log.go:172] (0x400050c000) Data frame received for 3\nI0826 23:44:19.424219    2019 log.go:172] (0x4000650320) (3) Data frame handling\nI0826 23:44:19.424477    2019 log.go:172] (0x400050c000) Data frame received for 5\nI0826 23:44:19.424673    2019 log.go:172] (0x40006503c0) (5) Data frame handling\nI0826 23:44:19.425015    2019 log.go:172] (0x400050c000) Data frame received for 1\nI0826 23:44:19.425136    2019 log.go:172] (0x40008401e0) (1) Data frame handling\nI0826 23:44:19.425277    2019 log.go:172] (0x40008401e0) (1) Data frame sent\nI0826 23:44:19.425471    2019 log.go:172] (0x4000650320) (3) Data frame sent\nI0826 23:44:19.425604    2019 log.go:172] (0x400050c000) Data frame received for 3\nI0826 23:44:19.425714    2019 log.go:172] (0x4000650320) (3) Data frame handling\nI0826 23:44:19.428280    
2019 log.go:172] (0x400050c000) (0x40008401e0) Stream removed, broadcasting: 1\nI0826 23:44:19.431637    2019 log.go:172] (0x400050c000) Go away received\nI0826 23:44:19.434521    2019 log.go:172] (0x400050c000) (0x40008401e0) Stream removed, broadcasting: 1\nI0826 23:44:19.434855    2019 log.go:172] (0x400050c000) (0x4000650320) Stream removed, broadcasting: 3\nI0826 23:44:19.435044    2019 log.go:172] (0x400050c000) (0x40006503c0) Stream removed, broadcasting: 5\n"
Aug 26 23:44:19.448: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 23:44:19.448: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
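The exec above moves nginx's index.html out of the web root on ss-2 so the pod's HTTP readiness probe starts failing, which is what drives the Ready=false transitions that follow. The trailing `|| true` masks a failed `mv`, so the exec reports rc=0 whether or not the file exists. A minimal local sketch of that masking idiom, with no cluster required (the temp-dir paths are illustrative only, not the test's paths):

```shell
#!/bin/sh
# Demonstrate the "mv ... || true" idiom from the test: the first mv
# succeeds; the second fails (the source is gone) but "|| true" hides
# the failure, so $? is 0 both times.
tmpdir=$(mktemp -d)
touch "$tmpdir/index.html"
mv -v "$tmpdir/index.html" "$tmpdir/saved.html" || true
echo "first rc=$?"
mv -v "$tmpdir/index.html" "$tmpdir/saved.html" 2>/dev/null || true
echo "second rc=$?"   # still 0: the failure is masked
rm -r "$tmpdir"
```

This is why the test must inspect stdout (the `'src' -> 'dst'` line from `mv -v`) rather than the exit code to know whether the move actually happened.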

Aug 26 23:44:19.448: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:44:19.501: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 26 23:44:29.514: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:44:29.514: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:44:29.514: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:44:29.538: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:29.538: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  }]
Aug 26 23:44:29.539: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:29.540: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:29.540: INFO: 
Aug 26 23:44:29.540: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:44:30.580: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:30.581: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  }]
Aug 26 23:44:30.581: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:30.581: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:30.582: INFO: 
Aug 26 23:44:30.582: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:44:31.590: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:31.590: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  }]
Aug 26 23:44:31.590: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:31.590: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:31.591: INFO: 
Aug 26 23:44:31.591: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:44:32.598: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:32.598: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  }]
Aug 26 23:44:32.599: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:32.599: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:32.599: INFO: 
Aug 26 23:44:32.599: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:44:33.609: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:33.609: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:35 +0000 UTC  }]
Aug 26 23:44:33.610: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:33.610: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:33.610: INFO: 
Aug 26 23:44:33.610: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:44:34.618: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:34.618: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:34.619: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:34.619: INFO: 
Aug 26 23:44:34.619: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:44:35.628: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:35.628: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:35.629: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:35.629: INFO: 
Aug 26 23:44:35.629: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:44:36.663: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:36.663: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:36.664: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:36.664: INFO: 
Aug 26 23:44:36.664: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:44:37.673: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:37.673: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:37.673: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:37.674: INFO: 
Aug 26 23:44:37.674: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:44:38.681: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:44:38.682: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:38.682: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:44:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:43:56 +0000 UTC  }]
Aug 26 23:44:38.682: INFO: 
Aug 26 23:44:38.682: INFO: StatefulSet ss has not reached scale 0, at 2
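Each iteration above re-lists the pods roughly once per second until `status.replicas` reaches 0 or the scale timeout expires. That wait can be sketched as a generic poll loop; `poll_until` is an illustrative helper, not the framework's actual function, and the real test evaluates the StatefulSet status through the API rather than a shell predicate:

```shell
#!/bin/sh
# Poll a predicate command until it succeeds or a deadline passes,
# mirroring the test's "wait for StatefulSet to reach scale 0" loop.
poll_until() {  # usage: poll_until <timeout_s> <interval_s> <cmd...>
  deadline=$(( $(date +%s) + $1 ))
  interval=$2
  shift 2
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$interval"
  done
}
```

In a real cluster the predicate could be something like `test "$(kubectl get statefulset ss -n statefulset-1387 -o jsonpath='{.status.replicas}')" = 0` (namespace taken from this log).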
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1387
Aug 26 23:44:39.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:44:41.113: INFO: rc: 1
Aug 26 23:44:41.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0x4002ef66f0 exit status 1   true [0x4002e261b8 0x4002e261d0 0x4002e261e8] [0x4002e261b8 0x4002e261d0 0x4002e261e8] [0x4002e261c8 0x4002e261e0] [0xad5158 0xad5158] 0x400241de00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
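The failed exec is retried every 10s. Note the two distinct failure modes in this log: the first attempt fails with "unable to upgrade connection: container not found" because the nginx container is already terminating, and every later attempt fails with NotFound once the ss-1 pod itself has been deleted by the scale-down. The fixed-delay retry can be sketched as follows (`retry` is an illustrative helper, not the framework's RunHostCmd; attempts and delay are parameters so the sketch runs quickly):

```shell
#!/bin/sh
# Retry a command a fixed number of times with a fixed delay between
# attempts, mirroring the log's "Waiting 10s to retry failed RunHostCmd".
# Returns 0 on the first success, or the last exit code on exhaustion.
retry() {  # usage: retry <attempts> <delay_s> <cmd...>
  attempts=$1
  delay=$2
  shift 2
  i=1
  while :; do
    "$@" && return 0
    rc=$?
    [ "$i" -ge "$attempts" ] && return "$rc"
    i=$((i + 1))
    sleep "$delay"
  done
}
```

In the log, each attempt wraps the same command shown above: `kubectl exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'`.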
Aug 26 23:44:51.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:44:52.373: INFO: rc: 1
Aug 26 23:44:52.373: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40022b5260 exit status 1   true [0x40021d6390 0x40021d63b0 0x40021d63c8] [0x40021d6390 0x40021d63b0 0x40021d63c8] [0x40021d63a0 0x40021d63c0] [0xad5158 0xad5158] 0x4002205f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:45:02.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:45:03.745: INFO: rc: 1
Aug 26 23:45:03.745: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40030eb800 exit status 1   true [0x4002000568 0x4002000580 0x4002000598] [0x4002000568 0x4002000580 0x4002000598] [0x4002000578 0x4002000590] [0xad5158 0xad5158] 0x4001ab9440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:45:13.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:45:14.988: INFO: rc: 1
Aug 26 23:45:14.988: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003624090 exit status 1   true [0x40000ee728 0x40000ee8a8 0x40000ee9c8] [0x40000ee728 0x40000ee8a8 0x40000ee9c8] [0x40000ee870 0x40000ee990] [0xad5158 0xad5158] 0x400037cfc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:45:24.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:45:26.307: INFO: rc: 1
Aug 26 23:45:26.307: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40021e81e0 exit status 1   true [0x4000568b50 0x4000568db8 0x4000568e70] [0x4000568b50 0x4000568db8 0x4000568e70] [0x4000568d60 0x4000568e10] [0xad5158 0xad5158] 0x40022041e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:45:36.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:45:37.573: INFO: rc: 1
Aug 26 23:45:37.574: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40030fa0c0 exit status 1   true [0x40001aa000 0x4001c2e008 0x4001c2e020] [0x40001aa000 0x4001c2e008 0x4001c2e020] [0x4001c2e000 0x4001c2e018] [0xad5158 0xad5158] 0x4001eb5680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:45:47.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:45:48.860: INFO: rc: 1
Aug 26 23:45:48.860: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003a84090 exit status 1   true [0x4002e8a000 0x4002e8a018 0x4002e8a030] [0x4002e8a000 0x4002e8a018 0x4002e8a030] [0x4002e8a010 0x4002e8a028] [0xad5158 0xad5158] 0x400257fda0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:45:58.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:46:00.108: INFO: rc: 1
Aug 26 23:46:00.108: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40021e82d0 exit status 1   true [0x4000568ed0 0x4000569030 0x4000569250] [0x4000568ed0 0x4000569030 0x4000569250] [0x4000568f98 0x40005691b8] [0xad5158 0xad5158] 0x4002205bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:46:10.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:46:11.383: INFO: rc: 1
Aug 26 23:46:11.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40021e8390 exit status 1   true [0x40005692a0 0x40005694d0 0x40005695d0] [0x40005692a0 0x40005694d0 0x40005695d0] [0x4000569498 0x4000569560] [0xad5158 0xad5158] 0x40028b2c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:46:21.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:46:22.641: INFO: rc: 1
Aug 26 23:46:22.641: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40021e84b0 exit status 1   true [0x4000569640 0x4000569860 0x4000569af0] [0x4000569640 0x4000569860 0x4000569af0] [0x4000569830 0x4000569938] [0xad5158 0xad5158] 0x40028b36e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:46:32.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:46:33.903: INFO: rc: 1
Aug 26 23:46:33.903: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003624180 exit status 1   true [0x40000ee9d0 0x40000ef168 0x40000ef360] [0x40000ee9d0 0x40000ef168 0x40000ef360] [0x40000eeaf8 0x40000ef288] [0xad5158 0xad5158] 0x40026e5200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:46:43.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:46:45.170: INFO: rc: 1
Aug 26 23:46:45.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003a841e0 exit status 1   true [0x4002e8a038 0x4002e8a050 0x4002e8a068] [0x4002e8a038 0x4002e8a050 0x4002e8a068] [0x4002e8a048 0x4002e8a060] [0xad5158 0xad5158] 0x4001faea20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:46:55.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:46:56.445: INFO: rc: 1
Aug 26 23:46:56.446: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40036242a0 exit status 1   true [0x40000ef3e8 0x40000ef6f0 0x40000ef8a8] [0x40000ef3e8 0x40000ef6f0 0x40000ef8a8] [0x40000ef4f8 0x40000ef7a8] [0xad5158 0xad5158] 0x400277c420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:47:06.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:47:07.763: INFO: rc: 1
Aug 26 23:47:07.764: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003a842a0 exit status 1   true [0x4002e8a070 0x4002e8a088 0x4002e8a0a0] [0x4002e8a070 0x4002e8a088 0x4002e8a0a0] [0x4002e8a080 0x4002e8a098] [0xad5158 0xad5158] 0x40025867e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:47:17.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:47:19.008: INFO: rc: 1
Aug 26 23:47:19.009: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003a84060 exit status 1   true [0x40001aa000 0x4002974010 0x4002974028] [0x40001aa000 0x4002974010 0x4002974028] [0x4002974008 0x4002974020] [0xad5158 0xad5158] 0x40026e4d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:47:29.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:47:30.289: INFO: rc: 1
Aug 26 23:47:30.290: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40030fa090 exit status 1   true [0x4002e8a000 0x4002e8a018 0x4002e8a030] [0x4002e8a000 0x4002e8a018 0x4002e8a030] [0x4002e8a010 0x4002e8a028] [0xad5158 0xad5158] 0x400257e1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:47:40.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:47:41.546: INFO: rc: 1
Aug 26 23:47:41.546: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40030fa180 exit status 1   true [0x4002e8a038 0x4002e8a050 0x4002e8a068] [0x4002e8a038 0x4002e8a050 0x4002e8a068] [0x4002e8a048 0x4002e8a060] [0xad5158 0xad5158] 0x4002204360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:47:51.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:47:52.818: INFO: rc: 1
Aug 26 23:47:52.818: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x40030fa240 exit status 1   true [0x4002e8a078 0x4002e8a0b0 0x4002e8a0c8] [0x4002e8a078 0x4002e8a0b0 0x4002e8a0c8] [0x4002e8a0a8 0x4002e8a0c0] [0xad5158 0xad5158] 0x4002205f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:48:02.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:48:04.165: INFO: rc: 1
Aug 26 23:48:04.166: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0x4003a841b0 exit status 1   true [0x4002974030 0x4002974048 0x4002974060] [0x4002974030 0x4002974048 0x4002974060] [0x4002974040 0x4002974058] [0xad5158 0xad5158] 0x40026e5bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 23:49:44.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1387 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:49:45.699: INFO: rc: 1
Aug 26 23:49:45.699: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
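The loop above is the framework retrying a failed RunHostCmd every 10s until its deadline expires. A minimal shell sketch of that retry-until-deadline pattern follows; the helper name, the 30s deadline, and the `flaky` stub are illustrative, not the framework's actual implementation:

```shell
#!/bin/sh
# Retry a command every $delay seconds until it succeeds or the next retry
# would pass the $deadline (in seconds) -- approximating the "Waiting 10s to
# retry failed RunHostCmd" behaviour seen in the log above.
retry_until_deadline() {
  deadline=$1; delay=$2; shift 2
  start=$(date +%s)
  while true; do
    if "$@"; then
      return 0                                   # command succeeded
    fi
    now=$(date +%s)
    if [ $((now - start + delay)) -gt "$deadline" ]; then
      return 1                                   # give up before the deadline is blown
    fi
    sleep "$delay"
  done
}

# Example: a stub command that fails twice, then succeeds on the third try.
attempts=0
flaky() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
retry_until_deadline 30 1 flaky && echo "succeeded after $attempts attempts"
```

In the run above the pod `ss-1` never reappeared, so every retry hit the `return 1` path and the framework eventually moved on to scaling the StatefulSet to 0.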
Aug 26 23:49:45.700: INFO: Scaling statefulset ss to 0
Aug 26 23:49:45.716: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 26 23:49:45.719: INFO: Deleting all statefulset in ns statefulset-1387
Aug 26 23:49:45.721: INFO: Scaling statefulset ss to 0
Aug 26 23:49:45.731: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:49:45.734: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:49:45.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1387" for this suite.
Aug 26 23:49:53.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:49:54.114: INFO: namespace statefulset-1387 deletion completed in 8.344467209s

• [SLOW TEST:379.319 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:49:54.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:50:32.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7490" for this suite.
Aug 26 23:50:38.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:50:38.819: INFO: namespace container-runtime-7490 deletion completed in 6.221065453s

• [SLOW TEST:44.702 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:50:38.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 23:50:38.890: INFO: Waiting up to 5m0s for pod "pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6" in namespace "emptydir-1164" to be "success or failure"
Aug 26 23:50:38.980: INFO: Pod "pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6": Phase="Pending", Reason="", readiness=false. Elapsed: 89.370812ms
Aug 26 23:50:41.021: INFO: Pod "pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130991838s
Aug 26 23:50:43.028: INFO: Pod "pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137305894s
STEP: Saw pod success
Aug 26 23:50:43.028: INFO: Pod "pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6" satisfied condition "success or failure"
Aug 26 23:50:43.032: INFO: Trying to get logs from node iruya-worker pod pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6 container test-container: 
STEP: delete the pod
Aug 26 23:50:43.115: INFO: Waiting for pod pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6 to disappear
Aug 26 23:50:43.182: INFO: Pod pod-47ac3a87-8a0b-453b-9851-06f4d9c116d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:50:43.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1164" for this suite.
Aug 26 23:50:49.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:50:49.350: INFO: namespace emptydir-1164 deletion completed in 6.158773191s

• [SLOW TEST:10.530 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
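The (root,0777,tmpfs) case boils down to mounting an emptyDir volume with medium Memory and having the test container report the mount's permission bits. A cluster-free sketch of just the permission assertion, where a temp directory stands in for the pod's volume mount and `stat -c` assumes GNU coreutils:

```shell
# Approximate the emptydir mode check locally: create a directory, apply the
# mode under test (0777), and verify what `stat` reports -- the e2e test does
# the equivalent inside the pod against the mounted emptyDir path.
dir=$(mktemp -d)
chmod 0777 "$dir"
mode=$(stat -c '%a' "$dir")          # octal permission bits, e.g. 777
echo "mount mode: $mode"
[ "$mode" = "777" ] || echo "unexpected mode"
rm -rf "$dir"
```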
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:50:49.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 23:50:49.440: INFO: Waiting up to 5m0s for pod "pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103" in namespace "emptydir-7830" to be "success or failure"
Aug 26 23:50:49.471: INFO: Pod "pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103": Phase="Pending", Reason="", readiness=false. Elapsed: 30.223485ms
Aug 26 23:50:51.632: INFO: Pod "pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19180938s
Aug 26 23:50:53.651: INFO: Pod "pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210216784s
Aug 26 23:50:55.655: INFO: Pod "pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.21481223s
STEP: Saw pod success
Aug 26 23:50:55.656: INFO: Pod "pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103" satisfied condition "success or failure"
Aug 26 23:50:55.659: INFO: Trying to get logs from node iruya-worker2 pod pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103 container test-container: 
STEP: delete the pod
Aug 26 23:50:55.903: INFO: Waiting for pod pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103 to disappear
Aug 26 23:50:55.907: INFO: Pod pod-4edfb4d8-fd12-42ed-a74e-6b5e5a9d5103 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:50:55.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7830" for this suite.
Aug 26 23:51:01.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:51:02.053: INFO: namespace emptydir-7830 deletion completed in 6.137281898s

• [SLOW TEST:12.700 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:51:02.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 26 23:51:02.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7531'
Aug 26 23:51:03.823: INFO: stderr: ""
Aug 26 23:51:03.823: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 26 23:51:03.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7531'
Aug 26 23:51:06.013: INFO: stderr: ""
Aug 26 23:51:06.013: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 26 23:51:07.021: INFO: Selector matched 1 pods for map[app:redis]
Aug 26 23:51:07.021: INFO: Found 0 / 1
Aug 26 23:51:08.020: INFO: Selector matched 1 pods for map[app:redis]
Aug 26 23:51:08.020: INFO: Found 1 / 1
Aug 26 23:51:08.021: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 23:51:08.025: INFO: Selector matched 1 pods for map[app:redis]
Aug 26 23:51:08.025: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 23:51:08.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rsxsz --namespace=kubectl-7531'
Aug 26 23:51:09.409: INFO: stderr: ""
Aug 26 23:51:09.409: INFO: stdout: "Name:           redis-master-rsxsz\nNamespace:      kubectl-7531\nPriority:       0\nNode:           iruya-worker/172.18.0.9\nStart Time:     Wed, 26 Aug 2020 23:51:03 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.1.139\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://8557aa70d2c7faf4d8d052c5bec34c787f122b6d7690decf3116387aa4aaa727\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 26 Aug 2020 23:51:07 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mxvjs (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-mxvjs:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-mxvjs\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  6s    default-scheduler      Successfully assigned kubectl-7531/redis-master-rsxsz to iruya-worker\n  Normal  Pulled     4s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-worker  Started container redis-master\n"
Aug 26 23:51:09.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7531'
Aug 26 23:51:10.821: INFO: stderr: ""
Aug 26 23:51:10.821: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-7531\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: redis-master-rsxsz\n"
Aug 26 23:51:10.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7531'
Aug 26 23:51:12.139: INFO: stderr: ""
Aug 26 23:51:12.139: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-7531\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.110.168.200\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.139:6379\nSession Affinity:  None\nEvents:            \n"
Aug 26 23:51:12.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 26 23:51:13.493: INFO: stderr: ""
Aug 26 23:51:13.493: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:34:51 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 26 Aug 2020 23:50:59 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 26 Aug 2020 23:50:59 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 26 Aug 2020 23:50:59 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 26 Aug 2020 23:50:59 +0000   Sat, 15 Aug 2020 09:35:31 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 3ed9130db08840259d2231bd97220883\n System UUID:                e52cc602-b019-45cd-b06f-235cc5705532\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-6krdd                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                coredns-5d4dd4b4db-htp88                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kindnet-gvnsh                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      11d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-proxy-ndl9h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         11d\n  local-path-storage         local-path-provisioner-668779bd7-g227z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 26 23:51:13.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7531'
Aug 26 23:51:14.810: INFO: stderr: ""
Aug 26 23:51:14.811: INFO: stdout: "Name:         kubectl-7531\nLabels:       e2e-framework=kubectl\n              e2e-run=19792bb5-9998-4e02-9ec7-df5bf5aadd94\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:51:14.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7531" for this suite.
Aug 26 23:51:36.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:51:36.991: INFO: namespace kubectl-7531 deletion completed in 22.172183791s

• [SLOW TEST:34.938 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
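The describe test above runs `kubectl describe` against the pod, rc, service, node, and namespace and asserts that the relevant fields appear in the output. A cluster-free sketch of that field check, using a canned snippet from this run in place of a live `kubectl describe pod` call:

```shell
# Verify that expected fields are present in `kubectl describe` style output.
# $describe_output is a canned excerpt (taken from the run above) rather than
# live cluster output, so this check runs without a kubeconfig; the set of
# fields checked is a simplification of what the e2e test actually asserts.
describe_output='Name:           redis-master-rsxsz
Namespace:      kubectl-7531
Labels:         app=redis
Status:         Running'

for field in Name: Namespace: Labels: Status:; do
  printf '%s\n' "$describe_output" | grep -q "^$field" || {
    echo "missing field: $field"; exit 1; }
done
echo "all expected fields present"
```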
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:51:36.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7521
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 23:51:37.115: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 23:52:05.328: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.100:8080/dial?request=hostName&protocol=http&host=10.244.2.99&port=8080&tries=1'] Namespace:pod-network-test-7521 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:52:05.329: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:52:05.397199       7 log.go:172] (0x4000cd6420) (0x4000a63720) Create stream
I0826 23:52:05.397401       7 log.go:172] (0x4000cd6420) (0x4000a63720) Stream added, broadcasting: 1
I0826 23:52:05.404898       7 log.go:172] (0x4000cd6420) Reply frame received for 1
I0826 23:52:05.405145       7 log.go:172] (0x4000cd6420) (0x40023bd180) Create stream
I0826 23:52:05.405274       7 log.go:172] (0x4000cd6420) (0x40023bd180) Stream added, broadcasting: 3
I0826 23:52:05.407441       7 log.go:172] (0x4000cd6420) Reply frame received for 3
I0826 23:52:05.407597       7 log.go:172] (0x4000cd6420) (0x4000a63860) Create stream
I0826 23:52:05.407673       7 log.go:172] (0x4000cd6420) (0x4000a63860) Stream added, broadcasting: 5
I0826 23:52:05.409234       7 log.go:172] (0x4000cd6420) Reply frame received for 5
I0826 23:52:05.470353       7 log.go:172] (0x4000cd6420) Data frame received for 3
I0826 23:52:05.470500       7 log.go:172] (0x40023bd180) (3) Data frame handling
I0826 23:52:05.470606       7 log.go:172] (0x40023bd180) (3) Data frame sent
I0826 23:52:05.470981       7 log.go:172] (0x4000cd6420) Data frame received for 3
I0826 23:52:05.471102       7 log.go:172] (0x40023bd180) (3) Data frame handling
I0826 23:52:05.471245       7 log.go:172] (0x4000cd6420) Data frame received for 5
I0826 23:52:05.471332       7 log.go:172] (0x4000a63860) (5) Data frame handling
I0826 23:52:05.472669       7 log.go:172] (0x4000cd6420) Data frame received for 1
I0826 23:52:05.472891       7 log.go:172] (0x4000a63720) (1) Data frame handling
I0826 23:52:05.473028       7 log.go:172] (0x4000a63720) (1) Data frame sent
I0826 23:52:05.473120       7 log.go:172] (0x4000cd6420) (0x4000a63720) Stream removed, broadcasting: 1
I0826 23:52:05.473226       7 log.go:172] (0x4000cd6420) Go away received
I0826 23:52:05.473669       7 log.go:172] (0x4000cd6420) (0x4000a63720) Stream removed, broadcasting: 1
I0826 23:52:05.473940       7 log.go:172] (0x4000cd6420) (0x40023bd180) Stream removed, broadcasting: 3
I0826 23:52:05.474078       7 log.go:172] (0x4000cd6420) (0x4000a63860) Stream removed, broadcasting: 5
Aug 26 23:52:05.475: INFO: Waiting for endpoints: map[]
Aug 26 23:52:05.480: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.100:8080/dial?request=hostName&protocol=http&host=10.244.1.140&port=8080&tries=1'] Namespace:pod-network-test-7521 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:52:05.480: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:52:05.543099       7 log.go:172] (0x40027c89a0) (0x4003590dc0) Create stream
I0826 23:52:05.543271       7 log.go:172] (0x40027c89a0) (0x4003590dc0) Stream added, broadcasting: 1
I0826 23:52:05.547089       7 log.go:172] (0x40027c89a0) Reply frame received for 1
I0826 23:52:05.547233       7 log.go:172] (0x40027c89a0) (0x40021be0a0) Create stream
I0826 23:52:05.547315       7 log.go:172] (0x40027c89a0) (0x40021be0a0) Stream added, broadcasting: 3
I0826 23:52:05.548877       7 log.go:172] (0x40027c89a0) Reply frame received for 3
I0826 23:52:05.549052       7 log.go:172] (0x40027c89a0) (0x4003590e60) Create stream
I0826 23:52:05.549147       7 log.go:172] (0x40027c89a0) (0x4003590e60) Stream added, broadcasting: 5
I0826 23:52:05.550665       7 log.go:172] (0x40027c89a0) Reply frame received for 5
I0826 23:52:05.613696       7 log.go:172] (0x40027c89a0) Data frame received for 3
I0826 23:52:05.613940       7 log.go:172] (0x40021be0a0) (3) Data frame handling
I0826 23:52:05.614103       7 log.go:172] (0x40027c89a0) Data frame received for 5
I0826 23:52:05.614261       7 log.go:172] (0x4003590e60) (5) Data frame handling
I0826 23:52:05.614388       7 log.go:172] (0x40021be0a0) (3) Data frame sent
I0826 23:52:05.614464       7 log.go:172] (0x40027c89a0) Data frame received for 3
I0826 23:52:05.614546       7 log.go:172] (0x40021be0a0) (3) Data frame handling
I0826 23:52:05.615362       7 log.go:172] (0x40027c89a0) Data frame received for 1
I0826 23:52:05.615446       7 log.go:172] (0x4003590dc0) (1) Data frame handling
I0826 23:52:05.615516       7 log.go:172] (0x4003590dc0) (1) Data frame sent
I0826 23:52:05.615594       7 log.go:172] (0x40027c89a0) (0x4003590dc0) Stream removed, broadcasting: 1
I0826 23:52:05.615851       7 log.go:172] (0x40027c89a0) Go away received
I0826 23:52:05.616011       7 log.go:172] (0x40027c89a0) (0x4003590dc0) Stream removed, broadcasting: 1
I0826 23:52:05.616117       7 log.go:172] (0x40027c89a0) (0x40021be0a0) Stream removed, broadcasting: 3
I0826 23:52:05.616237       7 log.go:172] (0x40027c89a0) (0x4003590e60) Stream removed, broadcasting: 5
Aug 26 23:52:05.616: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:52:05.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7521" for this suite.
Aug 26 23:52:29.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:52:29.833: INFO: namespace pod-network-test-7521 deletion completed in 24.208310982s

• [SLOW TEST:52.840 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
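The intra-pod check above asks one test pod, via a netexec-style `/dial` endpoint, to reach a second pod over HTTP and report which hostnames answered. A minimal Python sketch of that mechanic (the function names are illustrative, not part of the e2e framework, and the `{"responses": [...]}` reply shape is an assumption about the test image):

```python
import json
from urllib.parse import urlencode

def build_dial_url(dialer_ip, target_ip, port=8080, protocol="http", tries=1):
    """Build the /dial URL the framework curls from the host-test pod.

    The dialer pod is asked to reach target_ip:port and echo what it saw.
    """
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{dialer_ip}:8080/dial?{query}"

def reached_endpoints(dial_response_body):
    """Parse the dialer's JSON reply into the set of hostnames that answered.

    Assumes a netexec-style {"responses": [...]} body; the exact format may
    differ across test-image versions.
    """
    return set(json.loads(dial_response_body).get("responses", []))
```

The test loops this until the set of reached endpoints matches the expected netserver pods; `Waiting for endpoints: map[]` in the log means the remaining-endpoints map is empty, i.e. success.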
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:52:29.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 26 23:52:36.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-ccfd01e4-6ffa-4db2-b23d-2302efb6aa4a -c busybox-main-container --namespace=emptydir-713 -- cat /usr/share/volumeshare/shareddata.txt'
Aug 26 23:52:37.527: INFO: stderr: "I0826 23:52:37.405714    2827 log.go:172] (0x40006b6000) (0x40007ee140) Create stream\nI0826 23:52:37.408345    2827 log.go:172] (0x40006b6000) (0x40007ee140) Stream added, broadcasting: 1\nI0826 23:52:37.421389    2827 log.go:172] (0x40006b6000) Reply frame received for 1\nI0826 23:52:37.421995    2827 log.go:172] (0x40006b6000) (0x400094c000) Create stream\nI0826 23:52:37.422076    2827 log.go:172] (0x40006b6000) (0x400094c000) Stream added, broadcasting: 3\nI0826 23:52:37.423414    2827 log.go:172] (0x40006b6000) Reply frame received for 3\nI0826 23:52:37.423628    2827 log.go:172] (0x40006b6000) (0x40007ee320) Create stream\nI0826 23:52:37.423675    2827 log.go:172] (0x40006b6000) (0x40007ee320) Stream added, broadcasting: 5\nI0826 23:52:37.425294    2827 log.go:172] (0x40006b6000) Reply frame received for 5\nI0826 23:52:37.506095    2827 log.go:172] (0x40006b6000) Data frame received for 3\nI0826 23:52:37.506503    2827 log.go:172] (0x40006b6000) Data frame received for 1\nI0826 23:52:37.506610    2827 log.go:172] (0x400094c000) (3) Data frame handling\nI0826 23:52:37.506764    2827 log.go:172] (0x40006b6000) Data frame received for 5\nI0826 23:52:37.506846    2827 log.go:172] (0x40007ee320) (5) Data frame handling\nI0826 23:52:37.507021    2827 log.go:172] (0x40007ee140) (1) Data frame handling\nI0826 23:52:37.510210    2827 log.go:172] (0x40007ee140) (1) Data frame sent\nI0826 23:52:37.510444    2827 log.go:172] (0x400094c000) (3) Data frame sent\nI0826 23:52:37.510699    2827 log.go:172] (0x40006b6000) Data frame received for 3\nI0826 23:52:37.510865    2827 log.go:172] (0x400094c000) (3) Data frame handling\nI0826 23:52:37.512417    2827 log.go:172] (0x40006b6000) (0x40007ee140) Stream removed, broadcasting: 1\nI0826 23:52:37.514026    2827 log.go:172] (0x40006b6000) Go away received\nI0826 23:52:37.517217    2827 log.go:172] (0x40006b6000) (0x40007ee140) Stream removed, broadcasting: 1\nI0826 23:52:37.517513    2827 log.go:172] (0x40006b6000) (0x400094c000) Stream removed, broadcasting: 3\nI0826 23:52:37.517727    2827 log.go:172] (0x40006b6000) (0x40007ee320) Stream removed, broadcasting: 5\n"
Aug 26 23:52:37.527: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:52:37.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-713" for this suite.
Aug 26 23:52:43.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:52:43.731: INFO: namespace emptydir-713 deletion completed in 6.190838308s

• [SLOW TEST:13.897 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
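The shared-volume test above relies on two containers mounting the same emptyDir: the busybox sub-container writes `shareddata.txt`, and the main container reads it back with `cat`. A small Python sketch of that producer/consumer pattern against a shared directory (hypothetical helpers standing in for the two containers):

```python
import os
import tempfile

def write_shared(volume_dir, relpath, data):
    # The "sub" container writes into the shared emptyDir mount.
    path = os.path.join(volume_dir, relpath)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(data)

def read_shared(volume_dir, relpath):
    # The main container reads the same file through its own mount point.
    with open(os.path.join(volume_dir, relpath)) as f:
        return f.read()
```

Because emptyDir is a single directory on the node bind-mounted into both containers, a write by one is immediately visible to the other with no copying involved.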
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:52:43.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0826 23:52:55.432219       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:52:55.432: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:52:55.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9959" for this suite.
Aug 26 23:53:05.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:53:05.717: INFO: namespace gc-9959 deletion completed in 10.276719406s

• [SLOW TEST:21.986 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
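The garbage-collector invariant exercised above is that a dependent object survives as long as *any* of its owners is still live; only the pods whose sole owner was the deleted RC are collected, while pods that also list `simpletest-rc-to-stay` as an owner must remain. A simplified model of that decision (illustrative names, not the controller's real data structures):

```python
def deletable(dependent_owners, live_owners):
    """A dependent is garbage-collected only once none of its owners are live."""
    return not any(owner in live_owners for owner in dependent_owners)

def collect(pods, live_owners):
    # Return the pods the GC may delete, given the set of still-live owners.
    return [name for name, owners in pods.items()
            if deletable(owners, live_owners)]
```

Deleting `simpletest-rc-to-be-deleted` therefore removes half the pods; the other half keeps a valid owner and is untouched, which is exactly what the test asserts before gathering metrics.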
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:53:05.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8bd43856-b28c-479f-8cf6-1f4807cda5c1
STEP: Creating a pod to test consume secrets
Aug 26 23:53:05.813: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8" in namespace "projected-9967" to be "success or failure"
Aug 26 23:53:05.824: INFO: Pod "pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13419ms
Aug 26 23:53:07.831: INFO: Pod "pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016961508s
Aug 26 23:53:09.850: INFO: Pod "pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036151696s
STEP: Saw pod success
Aug 26 23:53:09.850: INFO: Pod "pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8" satisfied condition "success or failure"
Aug 26 23:53:09.886: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:53:09.914: INFO: Waiting for pod pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8 to disappear
Aug 26 23:53:09.919: INFO: Pod pod-projected-secrets-4362f275-e087-4c2e-a014-38be833961e8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:53:09.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9967" for this suite.
Aug 26 23:53:15.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:53:16.117: INFO: namespace projected-9967 deletion completed in 6.188349861s

• [SLOW TEST:10.393 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
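A secret volume (projected or plain) is materialized by decoding each base64-encoded data key and writing it out as a file under the mount path; the test pod then reads the file back. A bare-bones sketch of that decode-and-write step (illustrative only: the real kubelet also handles atomic updates, file modes, and projection paths):

```python
import base64
import os
import tempfile

def materialize_secret(volume_dir, data):
    """Write each base64-encoded secret value to a file named after its key."""
    for key, b64_value in data.items():
        with open(os.path.join(volume_dir, key), "wb") as f:
            f.write(base64.b64decode(b64_value))
```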
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:53:16.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 26 23:53:16.395: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 26 23:53:17.623: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:53:17.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8109" for this suite.
Aug 26 23:53:25.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:53:25.826: INFO: namespace replication-controller-8109 deletion completed in 8.164136764s

• [SLOW TEST:9.706 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
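The quota test above creates an RC asking for more replicas than the namespace quota allows, expects a `ReplicaFailure` condition to surface, then scales the RC down and expects the condition to clear. The condition logic can be modeled roughly like this (a sketch: the real controller copies the API error from the rejected pod create into the condition message):

```python
def rc_conditions(desired_replicas, quota_pods, created_pods):
    """Surface a ReplicaFailure-style condition when creations are quota-blocked."""
    conditions = []
    if created_pods < desired_replicas and desired_replicas > quota_pods:
        conditions.append({
            "type": "ReplicaFailure",
            "status": "True",
            "reason": "FailedCreate",  # pod create rejected by the quota admission check
        })
    return conditions
```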
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:53:25.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 26 23:53:25.910: INFO: Waiting up to 5m0s for pod "var-expansion-705779b1-7383-47ea-857e-861156a1ee02" in namespace "var-expansion-3484" to be "success or failure"
Aug 26 23:53:25.957: INFO: Pod "var-expansion-705779b1-7383-47ea-857e-861156a1ee02": Phase="Pending", Reason="", readiness=false. Elapsed: 47.199528ms
Aug 26 23:53:27.964: INFO: Pod "var-expansion-705779b1-7383-47ea-857e-861156a1ee02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05362528s
Aug 26 23:53:29.972: INFO: Pod "var-expansion-705779b1-7383-47ea-857e-861156a1ee02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06139326s
STEP: Saw pod success
Aug 26 23:53:29.972: INFO: Pod "var-expansion-705779b1-7383-47ea-857e-861156a1ee02" satisfied condition "success or failure"
Aug 26 23:53:29.977: INFO: Trying to get logs from node iruya-worker pod var-expansion-705779b1-7383-47ea-857e-861156a1ee02 container dapi-container: 
STEP: delete the pod
Aug 26 23:53:29.998: INFO: Waiting for pod var-expansion-705779b1-7383-47ea-857e-861156a1ee02 to disappear
Aug 26 23:53:30.003: INFO: Pod var-expansion-705779b1-7383-47ea-857e-861156a1ee02 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:53:30.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3484" for this suite.
Aug 26 23:53:36.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:53:36.246: INFO: namespace var-expansion-3484 deletion completed in 6.235776929s

• [SLOW TEST:10.419 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
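Variable expansion in container args follows the documented `$(VAR_NAME)` rules: a defined reference is substituted, an undefined reference is left literally in place, and `$$` escapes to a single `$` (so `$$(VAR)` yields the literal string `$(VAR)`). A small Python re-implementation of those rules (a sketch, not the actual Go code in the kubelet):

```python
import re

_REF = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")

def expand(s, env):
    """Kubernetes-style $(VAR_NAME) expansion for container command/args."""
    out = []
    i = 0
    while i < len(s):
        if s[i] == "$" and i + 1 < len(s):
            if s[i + 1] == "$":          # $$ -> literal $
                out.append("$")
                i += 2
                continue
            m = _REF.match(s, i)
            if m:
                name = m.group(1)
                # Defined: substitute; undefined: keep the reference as-is.
                out.append(env.get(name, m.group(0)))
                i = m.end()
                continue
        out.append(s[i])
        i += 1
    return "".join(out)
```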
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:53:36.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lm65
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 23:53:36.373: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lm65" in namespace "subpath-6431" to be "success or failure"
Aug 26 23:53:36.381: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Pending", Reason="", readiness=false. Elapsed: 7.486111ms
Aug 26 23:53:38.387: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014146409s
Aug 26 23:53:40.395: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 4.021318546s
Aug 26 23:53:42.401: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 6.028037159s
Aug 26 23:53:44.409: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 8.035363186s
Aug 26 23:53:46.416: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 10.042530034s
Aug 26 23:53:48.422: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 12.049114921s
Aug 26 23:53:50.427: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 14.054267744s
Aug 26 23:53:52.435: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 16.061644097s
Aug 26 23:53:54.443: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 18.06935112s
Aug 26 23:53:56.449: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 20.075553527s
Aug 26 23:53:58.456: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Running", Reason="", readiness=true. Elapsed: 22.082341953s
Aug 26 23:54:00.719: INFO: Pod "pod-subpath-test-configmap-lm65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.345865928s
STEP: Saw pod success
Aug 26 23:54:00.719: INFO: Pod "pod-subpath-test-configmap-lm65" satisfied condition "success or failure"
Aug 26 23:54:00.787: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-lm65 container test-container-subpath-configmap-lm65: 
STEP: delete the pod
Aug 26 23:54:01.199: INFO: Waiting for pod pod-subpath-test-configmap-lm65 to disappear
Aug 26 23:54:01.498: INFO: Pod pod-subpath-test-configmap-lm65 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lm65
Aug 26 23:54:01.498: INFO: Deleting pod "pod-subpath-test-configmap-lm65" in namespace "subpath-6431"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:54:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6431" for this suite.
Aug 26 23:54:07.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:54:07.759: INFO: namespace subpath-6431 deletion completed in 6.24713776s

• [SLOW TEST:31.511 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
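The "Atomic writer volumes" label refers to how configmap and secret mounts are updated: new content is written to the side and swapped in with a rename, so a container polling the subpath file never observes a half-written file. The core trick, sketched in Python (this mirrors the idea behind the kubelet's atomic writer, not its implementation):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace a file's content atomically: temp file in the same directory,
    then rename over the target. Readers see either the old content or the
    new content, never a partial write."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic on POSIX within one filesystem
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

The temp file must live in the same directory (hence the same filesystem) as the target, since `rename` is only atomic within a filesystem.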
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:54:07.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 26 23:54:07.893: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

[... the same two-entry directory listing is returned for each of the remaining proxy attempts; the log is truncated here, losing the end of this test and the header of the next ...]
------------------------------
[sig-apps] StatefulSet 
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
    should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8129
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 26 23:54:14.271: INFO: Found 0 stateful pods, waiting for 3
Aug 26 23:54:24.280: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:54:24.280: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:54:24.280: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 23:54:34.279: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:54:34.279: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:54:34.279: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:54:34.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8129 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 23:54:38.390: INFO: stderr: "I0826 23:54:38.226232    2849 log.go:172] (0x4000b42420) (0x4000770a00) Create stream\nI0826 23:54:38.230992    2849 log.go:172] (0x4000b42420) (0x4000770a00) Stream added, broadcasting: 1\nI0826 23:54:38.249816    2849 log.go:172] (0x4000b42420) Reply frame received for 1\nI0826 23:54:38.250708    2849 log.go:172] (0x4000b42420) (0x4000770aa0) Create stream\nI0826 23:54:38.250830    2849 log.go:172] (0x4000b42420) (0x4000770aa0) Stream added, broadcasting: 3\nI0826 23:54:38.253382    2849 log.go:172] (0x4000b42420) Reply frame received for 3\nI0826 23:54:38.253867    2849 log.go:172] (0x4000b42420) (0x4000770b40) Create stream\nI0826 23:54:38.253982    2849 log.go:172] (0x4000b42420) (0x4000770b40) Stream added, broadcasting: 5\nI0826 23:54:38.255362    2849 log.go:172] (0x4000b42420) Reply frame received for 5\nI0826 23:54:38.330317    2849 log.go:172] (0x4000b42420) Data frame received for 5\nI0826 23:54:38.330577    2849 log.go:172] (0x4000770b40) (5) Data frame handling\nI0826 23:54:38.331208    2849 log.go:172] (0x4000770b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 23:54:38.363296    2849 log.go:172] (0x4000b42420) Data frame received for 5\nI0826 23:54:38.363409    2849 log.go:172] (0x4000770b40) (5) Data frame handling\nI0826 23:54:38.363572    2849 log.go:172] (0x4000b42420) Data frame received for 3\nI0826 23:54:38.363705    2849 log.go:172] (0x4000770aa0) (3) Data frame handling\nI0826 23:54:38.363878    2849 log.go:172] (0x4000770aa0) (3) Data frame sent\nI0826 23:54:38.363976    2849 log.go:172] (0x4000b42420) Data frame received for 3\nI0826 23:54:38.364070    2849 log.go:172] (0x4000770aa0) (3) Data frame handling\nI0826 23:54:38.365378    2849 log.go:172] (0x4000b42420) Data frame received for 1\nI0826 23:54:38.365440    2849 log.go:172] (0x4000770a00) (1) Data frame handling\nI0826 23:54:38.365510    2849 log.go:172] (0x4000770a00) (1) Data frame sent\nI0826 23:54:38.367584    2849 log.go:172] (0x4000b42420) (0x4000770a00) Stream removed, broadcasting: 1\nI0826 23:54:38.371130    2849 log.go:172] (0x4000b42420) Go away received\nI0826 23:54:38.375022    2849 log.go:172] (0x4000b42420) (0x4000770a00) Stream removed, broadcasting: 1\nI0826 23:54:38.375569    2849 log.go:172] (0x4000b42420) (0x4000770aa0) Stream removed, broadcasting: 3\nI0826 23:54:38.376368    2849 log.go:172] (0x4000b42420) (0x4000770b40) Stream removed, broadcasting: 5\n"
Aug 26 23:54:38.391: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 23:54:38.392: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 26 23:54:48.468: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 26 23:54:58.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8129 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:55:00.035: INFO: stderr: "I0826 23:54:59.906353    2884 log.go:172] (0x40007e0790) (0x40009dc000) Create stream\nI0826 23:54:59.909851    2884 log.go:172] (0x40007e0790) (0x40009dc000) Stream added, broadcasting: 1\nI0826 23:54:59.918333    2884 log.go:172] (0x40007e0790) Reply frame received for 1\nI0826 23:54:59.918862    2884 log.go:172] (0x40007e0790) (0x40006ac0a0) Create stream\nI0826 23:54:59.918925    2884 log.go:172] (0x40007e0790) (0x40006ac0a0) Stream added, broadcasting: 3\nI0826 23:54:59.920256    2884 log.go:172] (0x40007e0790) Reply frame received for 3\nI0826 23:54:59.920467    2884 log.go:172] (0x40007e0790) (0x40006ac140) Create stream\nI0826 23:54:59.920517    2884 log.go:172] (0x40007e0790) (0x40006ac140) Stream added, broadcasting: 5\nI0826 23:54:59.921861    2884 log.go:172] (0x40007e0790) Reply frame received for 5\nI0826 23:55:00.012253    2884 log.go:172] (0x40007e0790) Data frame received for 3\nI0826 23:55:00.012564    2884 log.go:172] (0x40007e0790) Data frame received for 5\nI0826 23:55:00.013010    2884 log.go:172] (0x40007e0790) Data frame received for 1\nI0826 23:55:00.013149    2884 log.go:172] (0x40006ac0a0) (3) Data frame handling\nI0826 23:55:00.013287    2884 log.go:172] (0x40006ac140) (5) Data frame handling\nI0826 23:55:00.013518    2884 log.go:172] (0x40009dc000) (1) Data frame handling\nI0826 23:55:00.014101    2884 log.go:172] (0x40006ac140) (5) Data frame sent\nI0826 23:55:00.014224    2884 log.go:172] (0x40009dc000) (1) Data frame sent\nI0826 23:55:00.014480    2884 log.go:172] (0x40006ac0a0) (3) Data frame sent\nI0826 23:55:00.014700    2884 log.go:172] (0x40007e0790) Data frame received for 5\nI0826 23:55:00.014765    2884 log.go:172] (0x40006ac140) (5) Data frame handling\nI0826 23:55:00.015673    2884 log.go:172] (0x40007e0790) Data frame received for 3\nI0826 23:55:00.015732    2884 log.go:172] (0x40006ac0a0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0826 23:55:00.017813    
2884 log.go:172] (0x40007e0790) (0x40009dc000) Stream removed, broadcasting: 1\nI0826 23:55:00.019564    2884 log.go:172] (0x40007e0790) Go away received\nI0826 23:55:00.022939    2884 log.go:172] (0x40007e0790) (0x40009dc000) Stream removed, broadcasting: 1\nI0826 23:55:00.023205    2884 log.go:172] (0x40007e0790) (0x40006ac0a0) Stream removed, broadcasting: 3\nI0826 23:55:00.023428    2884 log.go:172] (0x40007e0790) (0x40006ac140) Stream removed, broadcasting: 5\n"
Aug 26 23:55:00.036: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 23:55:00.036: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 26 23:55:10.079: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
Aug 26 23:55:10.080: INFO: Waiting for Pod statefulset-8129/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 26 23:55:10.080: INFO: Waiting for Pod statefulset-8129/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 26 23:55:10.080: INFO: Waiting for Pod statefulset-8129/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 26 23:55:20.094: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
Aug 26 23:55:20.094: INFO: Waiting for Pod statefulset-8129/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 26 23:55:20.094: INFO: Waiting for Pod statefulset-8129/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 26 23:55:30.093: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
Aug 26 23:55:30.093: INFO: Waiting for Pod statefulset-8129/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug 26 23:55:40.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8129 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 26 23:55:42.104: INFO: stderr: "I0826 23:55:41.512627    2907 log.go:172] (0x40006e0370) (0x40004fa640) Create stream\nI0826 23:55:41.514994    2907 log.go:172] (0x40006e0370) (0x40004fa640) Stream added, broadcasting: 1\nI0826 23:55:41.523558    2907 log.go:172] (0x40006e0370) Reply frame received for 1\nI0826 23:55:41.524122    2907 log.go:172] (0x40006e0370) (0x4000598000) Create stream\nI0826 23:55:41.524194    2907 log.go:172] (0x40006e0370) (0x4000598000) Stream added, broadcasting: 3\nI0826 23:55:41.525969    2907 log.go:172] (0x40006e0370) Reply frame received for 3\nI0826 23:55:41.526486    2907 log.go:172] (0x40006e0370) (0x40004fa6e0) Create stream\nI0826 23:55:41.526613    2907 log.go:172] (0x40006e0370) (0x40004fa6e0) Stream added, broadcasting: 5\nI0826 23:55:41.528402    2907 log.go:172] (0x40006e0370) Reply frame received for 5\nI0826 23:55:41.602312    2907 log.go:172] (0x40006e0370) Data frame received for 5\nI0826 23:55:41.602616    2907 log.go:172] (0x40004fa6e0) (5) Data frame handling\nI0826 23:55:41.603325    2907 log.go:172] (0x40004fa6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0826 23:55:42.084370    2907 log.go:172] (0x40006e0370) Data frame received for 3\nI0826 23:55:42.084521    2907 log.go:172] (0x4000598000) (3) Data frame handling\nI0826 23:55:42.084679    2907 log.go:172] (0x4000598000) (3) Data frame sent\nI0826 23:55:42.084875    2907 log.go:172] (0x40006e0370) Data frame received for 3\nI0826 23:55:42.085044    2907 log.go:172] (0x40006e0370) Data frame received for 5\nI0826 23:55:42.085298    2907 log.go:172] (0x40004fa6e0) (5) Data frame handling\nI0826 23:55:42.085507    2907 log.go:172] (0x4000598000) (3) Data frame handling\nI0826 23:55:42.086263    2907 log.go:172] (0x40006e0370) Data frame received for 1\nI0826 23:55:42.086413    2907 log.go:172] (0x40004fa640) (1) Data frame handling\nI0826 23:55:42.086552    2907 log.go:172] (0x40004fa640) (1) Data frame sent\nI0826 23:55:42.087614    
2907 log.go:172] (0x40006e0370) (0x40004fa640) Stream removed, broadcasting: 1\nI0826 23:55:42.090271    2907 log.go:172] (0x40006e0370) Go away received\nI0826 23:55:42.094453    2907 log.go:172] (0x40006e0370) (0x40004fa640) Stream removed, broadcasting: 1\nI0826 23:55:42.094667    2907 log.go:172] (0x40006e0370) (0x4000598000) Stream removed, broadcasting: 3\nI0826 23:55:42.094838    2907 log.go:172] (0x40006e0370) (0x40004fa6e0) Stream removed, broadcasting: 5\n"
Aug 26 23:55:42.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 26 23:55:42.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 26 23:55:52.149: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 26 23:56:02.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8129 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 26 23:56:04.253: INFO: stderr: "I0826 23:56:04.102815    2931 log.go:172] (0x40006e4420) (0x40005906e0) Create stream\nI0826 23:56:04.106611    2931 log.go:172] (0x40006e4420) (0x40005906e0) Stream added, broadcasting: 1\nI0826 23:56:04.121814    2931 log.go:172] (0x40006e4420) Reply frame received for 1\nI0826 23:56:04.122925    2931 log.go:172] (0x40006e4420) (0x40008a4000) Create stream\nI0826 23:56:04.123038    2931 log.go:172] (0x40006e4420) (0x40008a4000) Stream added, broadcasting: 3\nI0826 23:56:04.125251    2931 log.go:172] (0x40006e4420) Reply frame received for 3\nI0826 23:56:04.125713    2931 log.go:172] (0x40006e4420) (0x4000590780) Create stream\nI0826 23:56:04.125807    2931 log.go:172] (0x40006e4420) (0x4000590780) Stream added, broadcasting: 5\nI0826 23:56:04.127726    2931 log.go:172] (0x40006e4420) Reply frame received for 5\nI0826 23:56:04.224248    2931 log.go:172] (0x40006e4420) Data frame received for 5\nI0826 23:56:04.226338    2931 log.go:172] (0x40006e4420) Data frame received for 3\nI0826 23:56:04.226677    2931 log.go:172] (0x40008a4000) (3) Data frame handling\nI0826 23:56:04.227514    2931 log.go:172] (0x40008a4000) (3) Data frame sent\nI0826 23:56:04.227821    2931 log.go:172] (0x40006e4420) Data frame received for 3\nI0826 23:56:04.228019    2931 log.go:172] (0x40008a4000) (3) Data frame handling\nI0826 23:56:04.231032    2931 log.go:172] (0x4000590780) (5) Data frame handling\nI0826 23:56:04.232058    2931 log.go:172] (0x40006e4420) Data frame received for 1\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0826 23:56:04.232210    2931 log.go:172] (0x40005906e0) (1) Data frame handling\nI0826 23:56:04.232355    2931 log.go:172] (0x4000590780) (5) Data frame sent\nI0826 23:56:04.232468    2931 log.go:172] (0x40006e4420) Data frame received for 5\nI0826 23:56:04.232564    2931 log.go:172] (0x40005906e0) (1) Data frame sent\nI0826 23:56:04.232714    2931 log.go:172] (0x4000590780) (5) Data frame handling\nI0826 23:56:04.234277    
2931 log.go:172] (0x40006e4420) (0x40005906e0) Stream removed, broadcasting: 1\nI0826 23:56:04.235026    2931 log.go:172] (0x40006e4420) Go away received\nI0826 23:56:04.239758    2931 log.go:172] (0x40006e4420) (0x40005906e0) Stream removed, broadcasting: 1\nI0826 23:56:04.240120    2931 log.go:172] (0x40006e4420) (0x40008a4000) Stream removed, broadcasting: 3\nI0826 23:56:04.241360    2931 log.go:172] (0x40006e4420) (0x4000590780) Stream removed, broadcasting: 5\n"
Aug 26 23:56:04.254: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 26 23:56:04.254: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 26 23:56:04.390: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
Aug 26 23:56:04.390: INFO: Waiting for Pod statefulset-8129/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 26 23:56:04.390: INFO: Waiting for Pod statefulset-8129/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 26 23:56:04.390: INFO: Waiting for Pod statefulset-8129/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 26 23:56:14.400: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
Aug 26 23:56:14.401: INFO: Waiting for Pod statefulset-8129/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 26 23:56:14.401: INFO: Waiting for Pod statefulset-8129/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 26 23:56:24.539: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
Aug 26 23:56:24.539: INFO: Waiting for Pod statefulset-8129/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 26 23:56:34.585: INFO: Waiting for StatefulSet statefulset-8129/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 26 23:56:44.407: INFO: Deleting all statefulset in ns statefulset-8129
Aug 26 23:56:44.412: INFO: Scaling statefulset ss2 to 0
Aug 26 23:57:04.442: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:57:04.448: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:57:04.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8129" for this suite.
Aug 26 23:57:12.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:57:12.650: INFO: namespace statefulset-8129 deletion completed in 8.170080931s

• [SLOW TEST:178.466 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
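The rolling-update/rollback sequence above exercises a StatefulSet with the RollingUpdate strategy: the image is bumped from nginx:1.14-alpine to 1.15-alpine, Pods are replaced in reverse ordinal order, then the template is reverted. A minimal sketch of the kind of manifest under test (only the name `ss2` and the image tags come from the log; field values like `serviceName` and the labels are illustrative assumptions, not the test's actual spec):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                    # StatefulSet name taken from the log above
spec:
  serviceName: test            # assumed headless-service name
  replicas: 3                  # the log waits on pods ss2-0 .. ss2-2
  updateStrategy:
    type: RollingUpdate        # updates proceed in reverse ordinal order
  selector:
    matchLabels:
      app: ss2                 # assumed label
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        # updated to docker.io/library/nginx:1.15-alpine, then rolled back
        image: docker.io/library/nginx:1.14-alpine
```

Each template change produces a new controller revision (`ss2-6c5cd755cd`, `ss2-7c9b54fd4c` in the log), and the test waits until every Pod's revision label matches the update revision.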
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:57:12.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 23:57:25.360: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:25.388: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:27.391: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:27.396: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:29.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:29.396: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:31.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:31.395: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:33.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:33.397: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:35.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:35.396: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:37.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:37.394: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:39.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:39.398: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:41.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:41.459: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:43.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:43.397: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:57:45.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:57:45.396: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:57:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9822" for this suite.
Aug 26 23:58:09.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:58:09.527: INFO: namespace container-lifecycle-hook-9822 deletion completed in 24.122672828s

• [SLOW TEST:56.875 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
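The lifecycle-hook test above creates a Pod whose container runs a postStart exec hook, verifies the hook executed (via a separate handler pod), then deletes the Pod and polls until it disappears. A rough sketch of such a Pod, assuming a generic busybox image and an illustrative hook command (the e2e suite uses its own test images and handler):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # pod name taken from the log
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # illustrative command; runs immediately after the container starts
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```

postStart runs asynchronously relative to the container's entrypoint, so the test checks for the hook's side effect rather than ordering guarantees.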
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:58:09.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:58:10.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db" in namespace "downward-api-3786" to be "success or failure"
Aug 26 23:58:10.078: INFO: Pod "downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db": Phase="Pending", Reason="", readiness=false. Elapsed: 71.479133ms
Aug 26 23:58:12.084: INFO: Pod "downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077383375s
Aug 26 23:58:14.160: INFO: Pod "downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153672594s
Aug 26 23:58:16.168: INFO: Pod "downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161309702s
STEP: Saw pod success
Aug 26 23:58:16.168: INFO: Pod "downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db" satisfied condition "success or failure"
Aug 26 23:58:16.173: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db container client-container: 
STEP: delete the pod
Aug 26 23:58:16.268: INFO: Waiting for pod downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db to disappear
Aug 26 23:58:16.273: INFO: Pod downwardapi-volume-a404153d-f4f4-46da-ace5-2b430c0e85db no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:58:16.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3786" for this suite.
Aug 26 23:58:22.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:58:22.520: INFO: namespace downward-api-3786 deletion completed in 6.237583389s

• [SLOW TEST:12.990 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
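This Downward API test mounts a volume exposing `limits.memory` for a container that declares no memory limit; the kubelet then substitutes the node's allocatable memory as the default. A hedged sketch of the shape of such a Pod (container name from the log; image, paths, and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: client-container                 # container name from the log
    image: busybox                         # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # note: no resources.limits.memory is set on this container
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          # with no limit set, this resolves to node allocatable memory
          resource: limits.memory
```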
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:58:22.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:58:26.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8883" for this suite.
Aug 26 23:59:04.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:59:05.038: INFO: namespace kubelet-test-8883 deletion completed in 38.176284436s

• [SLOW TEST:42.517 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
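The hostAliases test verifies that entries from `pod.spec.hostAliases` are appended to the container's `/etc/hosts`. A minimal sketch under assumed names and addresses (the conformance test's actual aliases are not shown in this log):

```yaml
apiVersion: v1
kind: Pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]   # illustrative entries
  containers:
  - name: busybox-host-aliases              # assumed name
    image: busybox                          # assumed image
    command: ["sh", "-c", "cat /etc/hosts"] # the test inspects this file's contents
```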
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:59:05.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-756e7aba-d70d-420c-a3e2-fd431069560d
STEP: Creating a pod to test consume secrets
Aug 26 23:59:05.157: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378" in namespace "projected-3405" to be "success or failure"
Aug 26 23:59:05.201: INFO: Pod "pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378": Phase="Pending", Reason="", readiness=false. Elapsed: 44.521382ms
Aug 26 23:59:07.207: INFO: Pod "pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050205478s
Aug 26 23:59:09.215: INFO: Pod "pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057799052s
STEP: Saw pod success
Aug 26 23:59:09.215: INFO: Pod "pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378" satisfied condition "success or failure"
Aug 26 23:59:09.227: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:59:09.263: INFO: Waiting for pod pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378 to disappear
Aug 26 23:59:09.271: INFO: Pod pod-projected-secrets-e336b1b4-5235-4b15-8d2b-71b2fb215378 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:59:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3405" for this suite.
Aug 26 23:59:15.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:59:15.451: INFO: namespace projected-3405 deletion completed in 6.171475954s

• [SLOW TEST:10.412 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
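The projected-secret test creates a Secret, projects it into a volume with a `defaultMode`, and asserts the mounted files carry those permissions. A sketch of the projection (the secret and container names come from the log; the image, mount path, and mode value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: projected-secret-volume-test       # container name from the log
    image: busybox                           # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                      # illustrative mode; the test asserts file permissions
      sources:
      - secret:
          name: projected-secret-test-756e7aba-d70d-420c-a3e2-fd431069560d
```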
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:59:15.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 26 23:59:15.577: INFO: Waiting up to 5m0s for pod "client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1" in namespace "containers-8634" to be "success or failure"
Aug 26 23:59:15.587: INFO: Pod "client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.633707ms
Aug 26 23:59:17.736: INFO: Pod "client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159063424s
Aug 26 23:59:19.744: INFO: Pod "client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16650231s
STEP: Saw pod success
Aug 26 23:59:19.744: INFO: Pod "client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1" satisfied condition "success or failure"
Aug 26 23:59:19.749: INFO: Trying to get logs from node iruya-worker pod client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1 container test-container: 
STEP: delete the pod
Aug 26 23:59:19.774: INFO: Waiting for pod client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1 to disappear
Aug 26 23:59:19.778: INFO: Pod client-containers-c6f01751-f81c-4187-870d-ad7c9501cfe1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:59:19.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8634" for this suite.
Aug 26 23:59:25.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:59:25.978: INFO: namespace containers-8634 deletion completed in 6.192158672s

• [SLOW TEST:10.524 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
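The "override all" test above sets both `command` and `args` on a container, replacing the image's built-in ENTRYPOINT and CMD, then reads the container's logs to confirm the override took effect. A hedged sketch (container name from the log; image and command values are assumptions):

```yaml
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container              # container name from the log
    image: busybox                    # assumed image
    command: ["/bin/echo"]            # overrides the image's ENTRYPOINT
    args: ["overridden", "arguments"] # overrides the image's CMD
```

Setting only `args` would keep the image's ENTRYPOINT; setting only `command` drops the image's CMD entirely, which is why this test exercises the "override all" combination.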
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:59:25.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-69j9q in namespace proxy-3360
I0826 23:59:26.162687       7 runners.go:180] Created replication controller with name: proxy-service-69j9q, namespace: proxy-3360, replica count: 1
I0826 23:59:27.214243       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:59:28.214877       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:59:29.215751       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:30.216433       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:31.217204       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:32.217949       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:33.218635       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:34.219378       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:35.220015       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:36.220701       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:59:37.221546       7 runners.go:180] proxy-service-69j9q Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 23:59:37.235: INFO: setup took 11.164269047s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 26 23:59:37.243: INFO: (0) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.575461ms)
Aug 26 23:59:37.246: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 9.406055ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 11.845926ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 11.010882ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 11.985327ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 11.171601ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 12.377487ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 12.246048ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 11.562211ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 11.021204ms)
Aug 26 23:59:37.249: INFO: (0) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 11.387773ms)
Aug 26 23:59:37.250: INFO: (0) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 12.707492ms)
Aug 26 23:59:37.250: INFO: (0) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: ... (200; 4.793964ms)
Aug 26 23:59:37.258: INFO: (1) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 5.215284ms)
Aug 26 23:59:37.258: INFO: (1) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 5.312264ms)
Aug 26 23:59:37.259: INFO: (1) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 5.344688ms)
Aug 26 23:59:37.259: INFO: (1) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 5.628968ms)
Aug 26 23:59:37.259: INFO: (1) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.605906ms)
Aug 26 23:59:37.259: INFO: (1) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.665389ms)
Aug 26 23:59:37.259: INFO: (1) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.958248ms)
Aug 26 23:59:37.259: INFO: (1) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.069406ms)
Aug 26 23:59:37.260: INFO: (1) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 6.424818ms)
Aug 26 23:59:37.260: INFO: (1) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.73242ms)
Aug 26 23:59:37.260: INFO: (1) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.834673ms)
Aug 26 23:59:37.260: INFO: (1) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 6.969443ms)
Aug 26 23:59:37.265: INFO: (2) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.023857ms)
Aug 26 23:59:37.265: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 4.121056ms)
Aug 26 23:59:37.265: INFO: (2) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 4.240345ms)
Aug 26 23:59:37.268: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 7.726268ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 7.7083ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 8.082299ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 7.855086ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 7.994452ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 7.977081ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 8.400446ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 8.449943ms)
Aug 26 23:59:37.269: INFO: (2) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: ... (200; 8.955789ms)
Aug 26 23:59:37.270: INFO: (2) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 8.975719ms)
Aug 26 23:59:37.270: INFO: (2) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 8.978477ms)
Aug 26 23:59:37.270: INFO: (2) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 9.447616ms)
Aug 26 23:59:37.275: INFO: (3) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 4.728901ms)
Aug 26 23:59:37.275: INFO: (3) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.62409ms)
Aug 26 23:59:37.276: INFO: (3) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 5.232321ms)
Aug 26 23:59:37.276: INFO: (3) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 5.118365ms)
Aug 26 23:59:37.276: INFO: (3) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 5.312613ms)
Aug 26 23:59:37.276: INFO: (3) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 5.611063ms)
Aug 26 23:59:37.276: INFO: (3) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.935638ms)
Aug 26 23:59:37.277: INFO: (3) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 6.12484ms)
Aug 26 23:59:37.277: INFO: (3) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.146065ms)
Aug 26 23:59:37.277: INFO: (3) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test (200; 6.494847ms)
Aug 26 23:59:37.277: INFO: (3) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.937706ms)
Aug 26 23:59:37.277: INFO: (3) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 7.050776ms)
Aug 26 23:59:37.278: INFO: (3) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 7.12351ms)
Aug 26 23:59:37.278: INFO: (3) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 7.012953ms)
Aug 26 23:59:37.278: INFO: (3) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 7.087637ms)
Aug 26 23:59:37.282: INFO: (4) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 4.636054ms)
Aug 26 23:59:37.283: INFO: (4) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 4.779958ms)
Aug 26 23:59:37.283: INFO: (4) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 4.469626ms)
Aug 26 23:59:37.283: INFO: (4) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 4.163022ms)
Aug 26 23:59:37.284: INFO: (4) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 4.335074ms)
Aug 26 23:59:37.285: INFO: (4) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.388601ms)
Aug 26 23:59:37.285: INFO: (4) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 4.039792ms)
Aug 26 23:59:37.285: INFO: (4) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 4.328775ms)
Aug 26 23:59:37.285: INFO: (4) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.862389ms)
Aug 26 23:59:37.286: INFO: (4) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.819067ms)
Aug 26 23:59:37.286: INFO: (4) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 5.216138ms)
Aug 26 23:59:37.286: INFO: (4) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.385034ms)
Aug 26 23:59:37.287: INFO: (4) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.899039ms)
Aug 26 23:59:37.291: INFO: (5) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 3.868452ms)
Aug 26 23:59:37.291: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 3.967004ms)
Aug 26 23:59:37.291: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 3.94873ms)
Aug 26 23:59:37.291: INFO: (5) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 3.860176ms)
Aug 26 23:59:37.291: INFO: (5) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.090014ms)
Aug 26 23:59:37.293: INFO: (5) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.820898ms)
Aug 26 23:59:37.293: INFO: (5) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 5.958186ms)
Aug 26 23:59:37.293: INFO: (5) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 5.921171ms)
Aug 26 23:59:37.293: INFO: (5) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.23992ms)
Aug 26 23:59:37.293: INFO: (5) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 6.180043ms)
Aug 26 23:59:37.293: INFO: (5) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.249177ms)
Aug 26 23:59:37.294: INFO: (5) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 6.567736ms)
Aug 26 23:59:37.295: INFO: (5) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 7.785843ms)
Aug 26 23:59:37.295: INFO: (5) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 7.929385ms)
Aug 26 23:59:37.299: INFO: (6) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 4.115358ms)
Aug 26 23:59:37.299: INFO: (6) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.332492ms)
Aug 26 23:59:37.300: INFO: (6) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 4.628393ms)
Aug 26 23:59:37.300: INFO: (6) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.124213ms)
Aug 26 23:59:37.301: INFO: (6) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 5.320217ms)
Aug 26 23:59:37.301: INFO: (6) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 6.522158ms)
Aug 26 23:59:37.302: INFO: (6) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.899119ms)
Aug 26 23:59:37.303: INFO: (6) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 7.639342ms)
Aug 26 23:59:37.307: INFO: (7) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 3.786007ms)
Aug 26 23:59:37.308: INFO: (7) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 4.225918ms)
Aug 26 23:59:37.308: INFO: (7) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 4.173035ms)
Aug 26 23:59:37.308: INFO: (7) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 4.579394ms)
Aug 26 23:59:37.308: INFO: (7) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 5.059636ms)
Aug 26 23:59:37.309: INFO: (7) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 5.209318ms)
Aug 26 23:59:37.309: INFO: (7) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.190402ms)
Aug 26 23:59:37.309: INFO: (7) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: ... (200; 5.392643ms)
Aug 26 23:59:37.316: INFO: (8) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 5.749525ms)
Aug 26 23:59:37.316: INFO: (8) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 5.622671ms)
Aug 26 23:59:37.317: INFO: (8) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 5.869338ms)
Aug 26 23:59:37.317: INFO: (8) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test (200; 6.4511ms)
Aug 26 23:59:37.317: INFO: (8) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.171185ms)
Aug 26 23:59:37.321: INFO: (9) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 3.360246ms)
Aug 26 23:59:37.321: INFO: (9) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 3.556136ms)
Aug 26 23:59:37.321: INFO: (9) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 3.756507ms)
Aug 26 23:59:37.322: INFO: (9) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.564913ms)
Aug 26 23:59:37.322: INFO: (9) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 4.764407ms)
Aug 26 23:59:37.322: INFO: (9) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 4.728775ms)
Aug 26 23:59:37.322: INFO: (9) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 4.607336ms)
Aug 26 23:59:37.322: INFO: (9) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 4.858034ms)
Aug 26 23:59:37.324: INFO: (9) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 6.68486ms)
Aug 26 23:59:37.324: INFO: (9) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 6.714043ms)
Aug 26 23:59:37.325: INFO: (9) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 4.06018ms)
Aug 26 23:59:37.330: INFO: (10) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 5.042793ms)
Aug 26 23:59:37.331: INFO: (10) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test (200; 5.680102ms)
Aug 26 23:59:37.331: INFO: (10) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.097842ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 6.248166ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 6.013312ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 6.145207ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.424463ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 6.696045ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.678374ms)
Aug 26 23:59:37.332: INFO: (10) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 6.783827ms)
Aug 26 23:59:37.333: INFO: (10) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 7.204535ms)
Aug 26 23:59:37.336: INFO: (11) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 3.018246ms)
Aug 26 23:59:37.336: INFO: (11) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 3.223192ms)
Aug 26 23:59:37.337: INFO: (11) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 3.603955ms)
Aug 26 23:59:37.337: INFO: (11) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 4.367119ms)
Aug 26 23:59:37.337: INFO: (11) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 4.516517ms)
Aug 26 23:59:37.338: INFO: (11) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.863925ms)
Aug 26 23:59:37.338: INFO: (11) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.097721ms)
Aug 26 23:59:37.338: INFO: (11) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 5.347714ms)
Aug 26 23:59:37.338: INFO: (11) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test (200; 5.81199ms)
Aug 26 23:59:37.339: INFO: (11) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 6.561934ms)
Aug 26 23:59:37.339: INFO: (11) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 6.505206ms)
Aug 26 23:59:37.340: INFO: (11) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.662305ms)
Aug 26 23:59:37.340: INFO: (11) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.779415ms)
Aug 26 23:59:37.343: INFO: (12) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 5.628551ms)
Aug 26 23:59:37.346: INFO: (12) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 5.895409ms)
Aug 26 23:59:37.346: INFO: (12) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.004738ms)
Aug 26 23:59:37.346: INFO: (12) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.709804ms)
Aug 26 23:59:37.346: INFO: (12) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 5.810308ms)
Aug 26 23:59:37.346: INFO: (12) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.116989ms)
Aug 26 23:59:37.346: INFO: (12) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.41512ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 6.306667ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 6.363726ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.739503ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 6.476861ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.646615ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 7.040469ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 6.927581ms)
Aug 26 23:59:37.347: INFO: (12) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 6.969006ms)
Aug 26 23:59:37.352: INFO: (13) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 3.801889ms)
Aug 26 23:59:37.352: INFO: (13) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 4.312257ms)
Aug 26 23:59:37.353: INFO: (13) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 5.650891ms)
Aug 26 23:59:37.354: INFO: (13) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.62419ms)
Aug 26 23:59:37.354: INFO: (13) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 4.574098ms)
Aug 26 23:59:37.354: INFO: (13) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.24212ms)
Aug 26 23:59:37.354: INFO: (13) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 5.685768ms)
Aug 26 23:59:37.354: INFO: (13) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 4.68313ms)
Aug 26 23:59:37.355: INFO: (13) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 4.882621ms)
Aug 26 23:59:37.355: INFO: (13) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 5.650669ms)
Aug 26 23:59:37.356: INFO: (13) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 5.543577ms)
Aug 26 23:59:37.356: INFO: (13) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.32857ms)
Aug 26 23:59:37.356: INFO: (13) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 5.728749ms)
Aug 26 23:59:37.361: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.455082ms)
Aug 26 23:59:37.361: INFO: (14) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:160/proxy/: foo (200; 4.892966ms)
Aug 26 23:59:37.361: INFO: (14) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 4.899408ms)
Aug 26 23:59:37.362: INFO: (14) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.1843ms)
Aug 26 23:59:37.362: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.418126ms)
Aug 26 23:59:37.362: INFO: (14) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 5.546047ms)
Aug 26 23:59:37.362: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 5.732962ms)
Aug 26 23:59:37.362: INFO: (14) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 5.641473ms)
Aug 26 23:59:37.362: INFO: (14) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: ... (200; 5.815562ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 6.376903ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.501537ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.551174ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.803492ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 6.81325ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 6.801092ms)
Aug 26 23:59:37.370: INFO: (15) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 6.92849ms)
Aug 26 23:59:37.371: INFO: (15) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 7.127145ms)
Aug 26 23:59:37.374: INFO: (16) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 2.993115ms)
Aug 26 23:59:37.374: INFO: (16) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 3.175253ms)
Aug 26 23:59:37.375: INFO: (16) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test<... (200; 5.35638ms)
Aug 26 23:59:37.376: INFO: (16) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 5.55849ms)
Aug 26 23:59:37.376: INFO: (16) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 5.695272ms)
Aug 26 23:59:37.377: INFO: (16) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.01929ms)
Aug 26 23:59:37.377: INFO: (16) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.139684ms)
Aug 26 23:59:37.377: INFO: (16) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 6.410759ms)
Aug 26 23:59:37.378: INFO: (16) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.49787ms)
Aug 26 23:59:37.378: INFO: (16) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.665613ms)
Aug 26 23:59:37.378: INFO: (16) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 6.714454ms)
Aug 26 23:59:37.383: INFO: (17) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 5.200903ms)
Aug 26 23:59:37.384: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb/proxy/: test (200; 5.993934ms)
Aug 26 23:59:37.384: INFO: (17) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 5.847971ms)
Aug 26 23:59:37.384: INFO: (17) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 6.313948ms)
Aug 26 23:59:37.384: INFO: (17) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname2/proxy/: tls qux (200; 6.442234ms)
Aug 26 23:59:37.384: INFO: (17) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 6.538082ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 6.333453ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 6.451391ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 6.530102ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/services/https:proxy-service-69j9q:tlsportname1/proxy/: tls baz (200; 6.740341ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 6.73791ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:162/proxy/: bar (200; 6.637409ms)
Aug 26 23:59:37.385: INFO: (17) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test (200; 3.492609ms)
Aug 26 23:59:37.390: INFO: (18) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 3.949101ms)
Aug 26 23:59:37.390: INFO: (18) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:160/proxy/: foo (200; 3.721173ms)
Aug 26 23:59:37.390: INFO: (18) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 4.20274ms)
Aug 26 23:59:37.390: INFO: (18) /api/v1/namespaces/proxy-3360/pods/http:proxy-service-69j9q-br9bb:1080/proxy/: ... (200; 4.046265ms)
Aug 26 23:59:37.390: INFO: (18) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 4.103722ms)
Aug 26 23:59:37.391: INFO: (18) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: ... (200; 3.214838ms)
Aug 26 23:59:37.397: INFO: (19) /api/v1/namespaces/proxy-3360/pods/proxy-service-69j9q-br9bb:1080/proxy/: test<... (200; 4.18811ms)
Aug 26 23:59:37.397: INFO: (19) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:443/proxy/: test (200; 5.290356ms)
Aug 26 23:59:37.398: INFO: (19) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname1/proxy/: foo (200; 5.363028ms)
Aug 26 23:59:37.399: INFO: (19) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:462/proxy/: tls qux (200; 6.147256ms)
Aug 26 23:59:37.399: INFO: (19) /api/v1/namespaces/proxy-3360/pods/https:proxy-service-69j9q-br9bb:460/proxy/: tls baz (200; 6.144346ms)
Aug 26 23:59:37.399: INFO: (19) /api/v1/namespaces/proxy-3360/services/http:proxy-service-69j9q:portname2/proxy/: bar (200; 6.172206ms)
Aug 26 23:59:37.399: INFO: (19) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname2/proxy/: bar (200; 6.283504ms)
Aug 26 23:59:37.399: INFO: (19) /api/v1/namespaces/proxy-3360/services/proxy-service-69j9q:portname1/proxy/: foo (200; 6.288464ms)
STEP: deleting ReplicationController proxy-service-69j9q in namespace proxy-3360, will wait for the garbage collector to delete the pods
Aug 26 23:59:37.462: INFO: Deleting ReplicationController proxy-service-69j9q took: 9.04818ms
Aug 26 23:59:37.763: INFO: Terminating ReplicationController proxy-service-69j9q pods took: 300.803516ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 26 23:59:39.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3360" for this suite.
Aug 26 23:59:46.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 26 23:59:46.147: INFO: namespace proxy-3360 deletion completed in 6.172852191s

• [SLOW TEST:20.169 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
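The proxy URLs exercised above follow the API server's proxy-subresource pattern: the target is encoded as `[scheme:]name[:port]` before `/proxy/`, for both pods and services. A minimal sketch reconstructing the paths seen in the log (pure string building, no cluster access assumed):

```python
def pod_proxy_path(namespace, pod, port=None, scheme=None):
    """Build an API-server pod proxy path like the ones in the log above.

    The target segment is [scheme:]name[:port], e.g.
    https:proxy-service-69j9q-br9bb:460.
    """
    target = pod
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/pods/{target}/proxy/"


def service_proxy_path(namespace, service, port_name=None, scheme=None):
    """Same pattern for the services proxy subresource; the port part
    may be a named service port (portname1, portname2 in the log)."""
    target = service
    if scheme:
        target = f"{scheme}:{target}"
    if port_name:
        target = f"{target}:{port_name}"
    return f"/api/v1/namespaces/{namespace}/services/{target}/proxy/"
```

These reproduce the exact request paths logged by the test, e.g. `pod_proxy_path("proxy-3360", "proxy-service-69j9q-br9bb", 160)`.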
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 26 23:59:46.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-84cd53f4-1e20-443c-9f50-9100049af925 in namespace container-probe-2613
Aug 26 23:59:50.306: INFO: Started pod liveness-84cd53f4-1e20-443c-9f50-9100049af925 in namespace container-probe-2613
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:59:50.312: INFO: Initial restart count of pod liveness-84cd53f4-1e20-443c-9f50-9100049af925 is 0
Aug 27 00:00:08.420: INFO: Restart count of pod container-probe-2613/liveness-84cd53f4-1e20-443c-9f50-9100049af925 is now 1 (18.108187355s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:00:08.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2613" for this suite.
Aug 27 00:00:14.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:00:14.608: INFO: namespace container-probe-2613 deletion completed in 6.15886319s

• [SLOW TEST:28.459 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
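The test above creates a pod whose `/healthz` endpoint starts failing, then waits for `restartCount` to go from 0 to 1. A hedged sketch of such a pod manifest as a plain dict — field names follow the core/v1 API, but the image and probe timings are illustrative, not taken from the log:

```python
# Minimal pod manifest with an HTTP liveness probe on /healthz.
# Image name and timing values are assumptions for illustration.
liveness_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-demo"},
    "spec": {
        "containers": [{
            "name": "liveness",
            "image": "k8s.gcr.io/liveness",  # assumption: any image serving /healthz
            "livenessProbe": {
                "httpGet": {"path": "/healthz", "port": 8080},
                "initialDelaySeconds": 5,  # let the server start before probing
                "periodSeconds": 3,        # probe every 3 seconds
                "failureThreshold": 3,     # kubelet restarts after 3 failures
            },
        }],
    },
}
```

When the handler fails `failureThreshold` consecutive probes, the kubelet restarts the container, which is what drives the restart count the log observes.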
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:00:14.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-04df1fdf-f070-4fca-aa51-bceb41ce3c6d
STEP: Creating a pod to test consume secrets
Aug 27 00:00:14.694: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3" in namespace "projected-2878" to be "success or failure"
Aug 27 00:00:14.719: INFO: Pod "pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.821431ms
Aug 27 00:00:16.726: INFO: Pod "pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031410261s
Aug 27 00:00:18.732: INFO: Pod "pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03760863s
STEP: Saw pod success
Aug 27 00:00:18.732: INFO: Pod "pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3" satisfied condition "success or failure"
Aug 27 00:00:18.737: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3 container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 00:00:18.784: INFO: Waiting for pod pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3 to disappear
Aug 27 00:00:18.790: INFO: Pod pod-projected-secrets-8dfd9222-b26c-48fd-94ef-b2a98a46c4c3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:00:18.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2878" for this suite.
Aug 27 00:00:24.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:00:24.962: INFO: namespace projected-2878 deletion completed in 6.165008783s

• [SLOW TEST:10.352 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:00:24.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 27 00:00:25.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3076'
Aug 27 00:00:26.682: INFO: stderr: ""
Aug 27 00:00:26.682: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 00:00:26.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3076'
Aug 27 00:00:27.969: INFO: stderr: ""
Aug 27 00:00:27.969: INFO: stdout: "update-demo-nautilus-fpfnt update-demo-nautilus-wwj6l "
Aug 27 00:00:27.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpfnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3076'
Aug 27 00:00:29.277: INFO: stderr: ""
Aug 27 00:00:29.277: INFO: stdout: ""
Aug 27 00:00:29.277: INFO: update-demo-nautilus-fpfnt is created but not running
Aug 27 00:00:34.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3076'
Aug 27 00:00:35.548: INFO: stderr: ""
Aug 27 00:00:35.548: INFO: stdout: "update-demo-nautilus-fpfnt update-demo-nautilus-wwj6l "
Aug 27 00:00:35.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpfnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3076'
Aug 27 00:00:36.823: INFO: stderr: ""
Aug 27 00:00:36.823: INFO: stdout: "true"
Aug 27 00:00:36.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpfnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3076'
Aug 27 00:00:38.137: INFO: stderr: ""
Aug 27 00:00:38.137: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:00:38.138: INFO: validating pod update-demo-nautilus-fpfnt
Aug 27 00:00:38.144: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:00:38.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:00:38.144: INFO: update-demo-nautilus-fpfnt is verified up and running
Aug 27 00:00:38.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwj6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3076'
Aug 27 00:00:39.418: INFO: stderr: ""
Aug 27 00:00:39.418: INFO: stdout: "true"
Aug 27 00:00:39.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wwj6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3076'
Aug 27 00:00:40.708: INFO: stderr: ""
Aug 27 00:00:40.708: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:00:40.708: INFO: validating pod update-demo-nautilus-wwj6l
Aug 27 00:00:40.714: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:00:40.715: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:00:40.715: INFO: update-demo-nautilus-wwj6l is verified up and running
STEP: using delete to clean up resources
Aug 27 00:00:40.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3076'
Aug 27 00:00:41.969: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:00:41.969: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 00:00:41.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3076'
Aug 27 00:00:43.391: INFO: stderr: "No resources found.\n"
Aug 27 00:00:43.391: INFO: stdout: ""
Aug 27 00:00:43.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3076 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 00:00:44.714: INFO: stderr: ""
Aug 27 00:00:44.714: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:00:44.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3076" for this suite.
Aug 27 00:01:06.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:01:06.936: INFO: namespace kubectl-3076 deletion completed in 22.213741617s

• [SLOW TEST:41.973 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
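The go-template that kubectl runs repeatedly above prints `true` only when the named container reports `state.running` in its status; an empty stdout means "created but not running", which is why the test retries. The same check expressed in Python over a pod object (a sketch of the template's logic, not the e2e framework's code):

```python
def container_running(pod, container_name):
    """Mirror the kubectl go-template check from the log: true iff the
    named container's status has a populated state.running entry."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    for status in statuses:
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False
```

A pod that is scheduled but still pulling its image has a `containerStatuses` entry with `state.waiting` rather than `state.running`, so this returns `False` — matching the `"update-demo-nautilus-fpfnt is created but not running"` line.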
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:01:06.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-f45894da-b7ec-4421-90b9-569f628e020d in namespace container-probe-477
Aug 27 00:01:11.139: INFO: Started pod test-webserver-f45894da-b7ec-4421-90b9-569f628e020d in namespace container-probe-477
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 00:01:11.144: INFO: Initial restart count of pod test-webserver-f45894da-b7ec-4421-90b9-569f628e020d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:05:12.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-477" for this suite.
Aug 27 00:05:18.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:05:18.732: INFO: namespace container-probe-477 deletion completed in 6.163456953s

• [SLOW TEST:251.793 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:05:18.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 27 00:05:20.003: INFO: Pod name wrapped-volume-race-885a5028-7272-4a56-8c3e-29e41f3a5676: Found 0 pods out of 5
Aug 27 00:05:25.026: INFO: Pod name wrapped-volume-race-885a5028-7272-4a56-8c3e-29e41f3a5676: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-885a5028-7272-4a56-8c3e-29e41f3a5676 in namespace emptydir-wrapper-3551, will wait for the garbage collector to delete the pods
Aug 27 00:05:39.287: INFO: Deleting ReplicationController wrapped-volume-race-885a5028-7272-4a56-8c3e-29e41f3a5676 took: 9.083351ms
Aug 27 00:05:39.587: INFO: Terminating ReplicationController wrapped-volume-race-885a5028-7272-4a56-8c3e-29e41f3a5676 pods took: 300.793376ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 00:06:24.484: INFO: Pod name wrapped-volume-race-7977f3bc-6996-44e2-96e5-7c83ede99b5c: Found 0 pods out of 5
Aug 27 00:06:29.672: INFO: Pod name wrapped-volume-race-7977f3bc-6996-44e2-96e5-7c83ede99b5c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7977f3bc-6996-44e2-96e5-7c83ede99b5c in namespace emptydir-wrapper-3551, will wait for the garbage collector to delete the pods
Aug 27 00:06:52.256: INFO: Deleting ReplicationController wrapped-volume-race-7977f3bc-6996-44e2-96e5-7c83ede99b5c took: 11.185924ms
Aug 27 00:06:52.757: INFO: Terminating ReplicationController wrapped-volume-race-7977f3bc-6996-44e2-96e5-7c83ede99b5c pods took: 501.013593ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 00:07:37.209: INFO: Pod name wrapped-volume-race-085d09fa-367e-4db9-aa43-fed190d7b729: Found 0 pods out of 5
Aug 27 00:07:42.254: INFO: Pod name wrapped-volume-race-085d09fa-367e-4db9-aa43-fed190d7b729: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-085d09fa-367e-4db9-aa43-fed190d7b729 in namespace emptydir-wrapper-3551, will wait for the garbage collector to delete the pods
Aug 27 00:08:05.533: INFO: Deleting ReplicationController wrapped-volume-race-085d09fa-367e-4db9-aa43-fed190d7b729 took: 269.871329ms
Aug 27 00:08:06.334: INFO: Terminating ReplicationController wrapped-volume-race-085d09fa-367e-4db9-aa43-fed190d7b729 pods took: 800.616177ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:08:54.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3551" for this suite.
Aug 27 00:09:04.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:09:04.413: INFO: namespace emptydir-wrapper-3551 deletion completed in 10.277877175s

• [SLOW TEST:225.680 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
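Each pod in the race test above mounts all 50 ConfigMaps as separate volumes, which is what historically triggered the emptyDir-wrapper race. A sketch generating that volume/mount list (volume names and mount paths here are illustrative, not the e2e test's actual names):

```python
def configmap_volumes(configmap_names):
    """Build one volume plus one mount per ConfigMap, as the
    wrapped-volume-race pods do. Names/paths are illustrative."""
    volumes, mounts = [], []
    for i, cm_name in enumerate(configmap_names):
        vol_name = f"racey-configmap-{i}"
        volumes.append({"name": vol_name, "configMap": {"name": cm_name}})
        mounts.append({"name": vol_name, "mountPath": f"/etc/config-{i}"})
    return volumes, mounts
```

The test then verifies that spawning and garbage-collecting five such pods three times in a row never races on the shared wrapper volume.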
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:09:04.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-1189
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1189
STEP: Deleting pre-stop pod
Aug 27 00:09:17.631: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:09:17.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1189" for this suite.
Aug 27 00:09:58.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:09:58.166: INFO: namespace prestop-1189 deletion completed in 40.153606434s

• [SLOW TEST:53.753 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
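The `"Received": {"prestop": 1}` in the server's state above means the tester pod's preStop hook fired exactly once when the pod was deleted. A hedged sketch of a container spec with such a hook — the hook shape follows the core/v1 lifecycle API, but the endpoint and port are hypothetical stand-ins for whatever the e2e server pod exposes:

```python
# Container with a preStop lifecycle hook. The /prestop endpoint and
# port 8080 are assumptions: the real e2e server counts hits on its
# own handler, which is what produces "Received": {"prestop": 1}.
prestop_container = {
    "name": "tester",
    "image": "busybox",  # illustrative image
    "lifecycle": {
        "preStop": {
            "httpGet": {"path": "/prestop", "port": 8080},
        },
    },
}
```

The kubelet runs the preStop handler before sending SIGTERM, so deleting the tester pod bumps the server's counter before the container exits.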
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:09:58.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 27 00:10:09.706: INFO: Waiting up to 5m0s for pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9" in namespace "pods-3456" to be "success or failure"
Aug 27 00:10:09.842: INFO: Pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 135.428852ms
Aug 27 00:10:11.847: INFO: Pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141004199s
Aug 27 00:10:13.854: INFO: Pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147389043s
Aug 27 00:10:16.123: INFO: Pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416416809s
Aug 27 00:10:18.128: INFO: Pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.421551864s
STEP: Saw pod success
Aug 27 00:10:18.128: INFO: Pod "client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9" satisfied condition "success or failure"
Aug 27 00:10:18.131: INFO: Trying to get logs from node iruya-worker pod client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9 container env3cont: 
STEP: delete the pod
Aug 27 00:10:18.183: INFO: Waiting for pod client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9 to disappear
Aug 27 00:10:18.241: INFO: Pod client-envvars-8ebc145f-7435-4cb3-a4f8-b32297da2ff9 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:10:18.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3456" for this suite.
Aug 27 00:10:58.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:10:58.642: INFO: namespace pods-3456 deletion completed in 40.358644258s

• [SLOW TEST:60.474 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
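The env-vars test above checks the documented convention that a pod sees Docker-links-style environment variables for each service that existed when the pod was created: the service name is uppercased, dashes become underscores, and `_SERVICE_HOST` / `_SERVICE_PORT` are appended. A sketch deriving the expected names:

```python
def service_env_names(service_name):
    """Expected per-service env var names per the documented convention:
    name uppercased, '-' replaced by '_', suffixed with
    _SERVICE_HOST and _SERVICE_PORT."""
    base = service_name.upper().replace("-", "_")
    return [f"{base}_SERVICE_HOST", f"{base}_SERVICE_PORT"]
```

So a service named `fooservice-1` yields `FOOSERVICE_1_SERVICE_HOST` and `FOOSERVICE_1_SERVICE_PORT` inside the client pod's environment, which is what the `env3cont` container's output is checked against.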
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:10:58.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-980867b6-507d-4188-97ba-bb951659c625 in namespace container-probe-8637
Aug 27 00:11:05.083: INFO: Started pod liveness-980867b6-507d-4188-97ba-bb951659c625 in namespace container-probe-8637
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 00:11:05.087: INFO: Initial restart count of pod liveness-980867b6-507d-4188-97ba-bb951659c625 is 0
Aug 27 00:11:21.153: INFO: Restart count of pod container-probe-8637/liveness-980867b6-507d-4188-97ba-bb951659c625 is now 1 (16.066068308s elapsed)
Aug 27 00:11:41.224: INFO: Restart count of pod container-probe-8637/liveness-980867b6-507d-4188-97ba-bb951659c625 is now 2 (36.136452182s elapsed)
Aug 27 00:12:01.293: INFO: Restart count of pod container-probe-8637/liveness-980867b6-507d-4188-97ba-bb951659c625 is now 3 (56.205662871s elapsed)
Aug 27 00:12:21.359: INFO: Restart count of pod container-probe-8637/liveness-980867b6-507d-4188-97ba-bb951659c625 is now 4 (1m16.271985424s elapsed)
Aug 27 00:13:29.073: INFO: Restart count of pod container-probe-8637/liveness-980867b6-507d-4188-97ba-bb951659c625 is now 5 (2m23.985687073s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:13:29.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8637" for this suite.
Aug 27 00:13:35.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:13:35.531: INFO: namespace container-probe-8637 deletion completed in 6.201433043s

• [SLOW TEST:156.887 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
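The widening gaps between restarts in the log (about 16s to the first, ~20s each for the next three, then ~68s to the fifth) reflect the kubelet's exponential CrashLoopBackOff. A sketch of the documented schedule — 10s base, doubling per restart, capped at 5 minutes; the exact elapsed times in the log also include probe periods and container run time, so this models only the backoff component:

```python
def crashloop_backoff_delays(restarts, base=10, cap=300):
    """Kubelet-style restart backoff in seconds: 10, 20, 40, ...
    capped at 300 (5 minutes)."""
    delays, delay = [], base
    for _ in range(restarts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays
```

This is why the test can assert the restart count is monotonically increasing while tolerating progressively longer waits between observations.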
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:13:35.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:13:36.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4026" for this suite.
Aug 27 00:14:04.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:14:05.125: INFO: namespace pods-4026 deletion completed in 29.027199258s

• [SLOW TEST:29.589 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:14:05.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 27 00:14:05.819: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:14:23.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8395" for this suite.
Aug 27 00:14:30.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:14:30.142: INFO: namespace init-container-8395 deletion completed in 6.80680244s

• [SLOW TEST:25.015 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:14:30.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 27 00:14:36.983: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:14:37.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8182" for this suite.
Aug 27 00:14:43.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:14:43.280: INFO: namespace container-runtime-8182 deletion completed in 6.16308998s

• [SLOW TEST:13.137 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:14:43.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 27 00:14:47.940: INFO: Successfully updated pod "annotationupdateeee25185-89f7-41c3-8971-10122d3a11d1"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:14:51.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6253" for this suite.
Aug 27 00:15:14.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:15:14.657: INFO: namespace projected-6253 deletion completed in 22.676213249s

• [SLOW TEST:31.376 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:15:14.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 27 00:15:14.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1306'
Aug 27 00:15:23.820: INFO: stderr: ""
Aug 27 00:15:23.820: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 00:15:23.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1306'
Aug 27 00:15:25.164: INFO: stderr: ""
Aug 27 00:15:25.164: INFO: stdout: "update-demo-nautilus-gmvvd update-demo-nautilus-sxpfw "
Aug 27 00:15:25.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmvvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:26.549: INFO: stderr: ""
Aug 27 00:15:26.549: INFO: stdout: ""
Aug 27 00:15:26.549: INFO: update-demo-nautilus-gmvvd is created but not running
Aug 27 00:15:31.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1306'
Aug 27 00:15:32.915: INFO: stderr: ""
Aug 27 00:15:32.915: INFO: stdout: "update-demo-nautilus-gmvvd update-demo-nautilus-sxpfw "
Aug 27 00:15:32.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmvvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:34.204: INFO: stderr: ""
Aug 27 00:15:34.204: INFO: stdout: "true"
Aug 27 00:15:34.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gmvvd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:35.482: INFO: stderr: ""
Aug 27 00:15:35.483: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:15:35.483: INFO: validating pod update-demo-nautilus-gmvvd
Aug 27 00:15:35.489: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:15:35.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:15:35.489: INFO: update-demo-nautilus-gmvvd is verified up and running
Aug 27 00:15:35.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxpfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:36.772: INFO: stderr: ""
Aug 27 00:15:36.772: INFO: stdout: "true"
Aug 27 00:15:36.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxpfw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:38.038: INFO: stderr: ""
Aug 27 00:15:38.038: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:15:38.038: INFO: validating pod update-demo-nautilus-sxpfw
Aug 27 00:15:38.043: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:15:38.044: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:15:38.044: INFO: update-demo-nautilus-sxpfw is verified up and running
STEP: scaling down the replication controller
Aug 27 00:15:38.051: INFO: scanned /root for discovery docs: 
Aug 27 00:15:38.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1306'
Aug 27 00:15:40.470: INFO: stderr: ""
Aug 27 00:15:40.470: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 00:15:40.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1306'
Aug 27 00:15:41.758: INFO: stderr: ""
Aug 27 00:15:41.758: INFO: stdout: "update-demo-nautilus-gmvvd update-demo-nautilus-sxpfw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 00:15:46.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1306'
Aug 27 00:15:48.032: INFO: stderr: ""
Aug 27 00:15:48.032: INFO: stdout: "update-demo-nautilus-gmvvd update-demo-nautilus-sxpfw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 00:15:53.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1306'
Aug 27 00:15:54.335: INFO: stderr: ""
Aug 27 00:15:54.335: INFO: stdout: "update-demo-nautilus-sxpfw "
Aug 27 00:15:54.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxpfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:55.606: INFO: stderr: ""
Aug 27 00:15:55.606: INFO: stdout: "true"
Aug 27 00:15:55.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxpfw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:15:56.910: INFO: stderr: ""
Aug 27 00:15:56.910: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:15:56.910: INFO: validating pod update-demo-nautilus-sxpfw
Aug 27 00:15:56.915: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:15:56.915: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:15:56.915: INFO: update-demo-nautilus-sxpfw is verified up and running
STEP: scaling up the replication controller
Aug 27 00:15:56.922: INFO: scanned /root for discovery docs: 
Aug 27 00:15:56.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1306'
Aug 27 00:15:59.325: INFO: stderr: ""
Aug 27 00:15:59.325: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 00:15:59.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1306'
Aug 27 00:16:00.625: INFO: stderr: ""
Aug 27 00:16:00.625: INFO: stdout: "update-demo-nautilus-2g9nj update-demo-nautilus-sxpfw "
Aug 27 00:16:00.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2g9nj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:16:01.888: INFO: stderr: ""
Aug 27 00:16:01.888: INFO: stdout: "true"
Aug 27 00:16:01.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2g9nj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:16:03.177: INFO: stderr: ""
Aug 27 00:16:03.177: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:16:03.177: INFO: validating pod update-demo-nautilus-2g9nj
Aug 27 00:16:03.183: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:16:03.183: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:16:03.183: INFO: update-demo-nautilus-2g9nj is verified up and running
Aug 27 00:16:03.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxpfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:16:04.455: INFO: stderr: ""
Aug 27 00:16:04.455: INFO: stdout: "true"
Aug 27 00:16:04.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sxpfw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1306'
Aug 27 00:16:05.688: INFO: stderr: ""
Aug 27 00:16:05.688: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:16:05.688: INFO: validating pod update-demo-nautilus-sxpfw
Aug 27 00:16:05.694: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:16:05.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:16:05.694: INFO: update-demo-nautilus-sxpfw is verified up and running
STEP: using delete to clean up resources
Aug 27 00:16:05.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1306'
Aug 27 00:16:06.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:16:06.959: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 00:16:06.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1306'
Aug 27 00:16:08.451: INFO: stderr: "No resources found.\n"
Aug 27 00:16:08.451: INFO: stdout: ""
Aug 27 00:16:08.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1306 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 00:16:09.725: INFO: stderr: ""
Aug 27 00:16:09.725: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:16:09.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1306" for this suite.
Aug 27 00:16:15.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:16:15.922: INFO: namespace kubectl-1306 deletion completed in 6.187654613s

• [SLOW TEST:61.262 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:16:15.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 27 00:16:20.168: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:16:20.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6999" for this suite.
Aug 27 00:16:26.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:16:26.515: INFO: namespace container-runtime-6999 deletion completed in 6.178992407s

• [SLOW TEST:10.590 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:16:26.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:16:26.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29" in namespace "projected-208" to be "success or failure"
Aug 27 00:16:26.755: INFO: Pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 5.388741ms
Aug 27 00:16:29.939: INFO: Pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 3.18941948s
Aug 27 00:16:31.946: INFO: Pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 5.196788416s
Aug 27 00:16:33.952: INFO: Pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 7.202883358s
Aug 27 00:16:35.959: INFO: Pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.210113497s
STEP: Saw pod success
Aug 27 00:16:35.960: INFO: Pod "downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29" satisfied condition "success or failure"
Aug 27 00:16:35.981: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29 container client-container: 
STEP: delete the pod
Aug 27 00:16:36.039: INFO: Waiting for pod downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29 to disappear
Aug 27 00:16:36.043: INFO: Pod downwardapi-volume-53f458e2-b238-48c6-81a2-1e240283cf29 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:16:36.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-208" for this suite.
Aug 27 00:16:42.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:16:42.218: INFO: namespace projected-208 deletion completed in 6.166368779s

• [SLOW TEST:15.702 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:16:42.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 27 00:16:42.283: INFO: namespace kubectl-5565
Aug 27 00:16:42.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5565'
Aug 27 00:16:44.422: INFO: stderr: ""
Aug 27 00:16:44.422: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 27 00:16:45.432: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:16:45.432: INFO: Found 0 / 1
Aug 27 00:16:46.431: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:16:46.431: INFO: Found 0 / 1
Aug 27 00:16:47.431: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:16:47.431: INFO: Found 0 / 1
Aug 27 00:16:48.430: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:16:48.430: INFO: Found 0 / 1
Aug 27 00:16:49.431: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:16:49.431: INFO: Found 1 / 1
Aug 27 00:16:49.431: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 27 00:16:49.438: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:16:49.438: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 00:16:49.438: INFO: wait on redis-master startup in kubectl-5565 
Aug 27 00:16:49.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bsxnq redis-master --namespace=kubectl-5565'
Aug 27 00:16:50.760: INFO: stderr: ""
Aug 27 00:16:50.760: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Aug 00:16:47.834 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 00:16:47.834 # Server started, Redis version 3.2.12\n1:M 27 Aug 00:16:47.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 00:16:47.834 * The server is now ready to accept connections on port 6379\n"
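(Editor's note: the two kernel warnings in the Redis startup banner above are host-level settings, not test failures. On a node where they matter, they would be addressed roughly as follows — a sketch assuming root access on the node itself, not inside the container:)

```shell
# Raise the socket listen backlog so Redis's tcp-backlog of 511 is honoured
# instead of being capped at the default of 128.
sysctl -w net.core.somaxconn=511

# Disable Transparent Huge Pages, as the Redis log recommends.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```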
STEP: exposing RC
Aug 27 00:16:50.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5565'
Aug 27 00:16:52.213: INFO: stderr: ""
Aug 27 00:16:52.213: INFO: stdout: "service/rm2 exposed\n"
Aug 27 00:16:52.217: INFO: Service rm2 in namespace kubectl-5565 found.
STEP: exposing service
Aug 27 00:16:54.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5565'
Aug 27 00:16:55.701: INFO: stderr: ""
Aug 27 00:16:55.701: INFO: stdout: "service/rm3 exposed\n"
Aug 27 00:16:55.706: INFO: Service rm3 in namespace kubectl-5565 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:16:57.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5565" for this suite.
Aug 27 00:17:19.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:17:20.363: INFO: namespace kubectl-5565 deletion completed in 22.636573619s

• [SLOW TEST:38.143 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
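(Editor's note: the expose steps exercised by the test above can be reproduced by hand. A minimal sketch, assuming a reachable cluster via the current kubeconfig and an RC manifest file named `redis-master-rc.yaml` — the filename is illustrative, the resource names and namespace are the test's own:)

```shell
# Create the replication controller the test started from.
kubectl create -f redis-master-rc.yaml --namespace=kubectl-5565

# Expose the RC as a service on port 1234, forwarding to Redis on 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 \
  --namespace=kubectl-5565

# A service can itself be exposed under a new name and port, as the
# second expose step in the log shows.
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 \
  --namespace=kubectl-5565

# Both services should now exist and select the redis-master pods.
kubectl get services rm2 rm3 --namespace=kubectl-5565
```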
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:17:20.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-92727765-7135-481e-9fa8-7a44e34de9bf
STEP: Creating a pod to test consume secrets
Aug 27 00:17:20.947: INFO: Waiting up to 5m0s for pod "pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5" in namespace "secrets-1642" to be "success or failure"
Aug 27 00:17:21.189: INFO: Pod "pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5": Phase="Pending", Reason="", readiness=false. Elapsed: 242.3082ms
Aug 27 00:17:23.256: INFO: Pod "pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308826839s
Aug 27 00:17:25.261: INFO: Pod "pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314171211s
Aug 27 00:17:27.268: INFO: Pod "pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.321527911s
STEP: Saw pod success
Aug 27 00:17:27.269: INFO: Pod "pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5" satisfied condition "success or failure"
Aug 27 00:17:27.275: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5 container secret-env-test: 
STEP: delete the pod
Aug 27 00:17:27.302: INFO: Waiting for pod pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5 to disappear
Aug 27 00:17:27.339: INFO: Pod pod-secrets-e4ed636f-4712-41f2-9b05-e7d7541650e5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:17:27.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1642" for this suite.
Aug 27 00:17:33.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:17:33.530: INFO: namespace secrets-1642 deletion completed in 6.18436395s

• [SLOW TEST:13.165 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
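(Editor's note: the Secrets spec above injects a secret value into a container's environment via `secretKeyRef`. A minimal equivalent manifest, assuming the generated secret name from this run; the key name `data-1` and the `busybox` image are illustrative, not taken from the log:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]   # dump the environment so the value is visible in logs
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-92727765-7135-481e-9fa8-7a44e34de9bf
          key: data-1              # illustrative key; must exist in the secret
```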
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:17:33.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:17:33.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8" in namespace "projected-6991" to be "success or failure"
Aug 27 00:17:33.637: INFO: Pod "downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.643422ms
Aug 27 00:17:35.645: INFO: Pod "downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019497029s
Aug 27 00:17:37.652: INFO: Pod "downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026898528s
STEP: Saw pod success
Aug 27 00:17:37.653: INFO: Pod "downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8" satisfied condition "success or failure"
Aug 27 00:17:37.658: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8 container client-container: 
STEP: delete the pod
Aug 27 00:17:37.698: INFO: Waiting for pod downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8 to disappear
Aug 27 00:17:37.708: INFO: Pod downwardapi-volume-091001f6-02c8-4931-8947-46c755e379c8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:17:37.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6991" for this suite.
Aug 27 00:17:43.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:17:43.876: INFO: namespace projected-6991 deletion completed in 6.157943929s

• [SLOW TEST:10.344 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
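(Editor's note: the projected downwardAPI spec above mounts only the pod's name into the container. A minimal sketch of such a pod; the mount path and `busybox` image are illustrative assumptions:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]  # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # exposed as the file "podname"
```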
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:17:43.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-5h5s
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 00:17:43.980: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5h5s" in namespace "subpath-201" to be "success or failure"
Aug 27 00:17:43.997: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.714353ms
Aug 27 00:17:46.179: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19825811s
Aug 27 00:17:48.185: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204711144s
Aug 27 00:17:50.192: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 6.21125256s
Aug 27 00:17:52.198: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 8.217825115s
Aug 27 00:17:54.205: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 10.224139278s
Aug 27 00:17:56.211: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 12.230688585s
Aug 27 00:17:58.218: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 14.237185111s
Aug 27 00:18:00.223: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 16.242769125s
Aug 27 00:18:02.230: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 18.24912728s
Aug 27 00:18:04.236: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 20.256039896s
Aug 27 00:18:06.244: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 22.263962922s
Aug 27 00:18:08.252: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Running", Reason="", readiness=true. Elapsed: 24.271079981s
Aug 27 00:18:10.258: INFO: Pod "pod-subpath-test-downwardapi-5h5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.277760575s
STEP: Saw pod success
Aug 27 00:18:10.258: INFO: Pod "pod-subpath-test-downwardapi-5h5s" satisfied condition "success or failure"
Aug 27 00:18:10.263: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-5h5s container test-container-subpath-downwardapi-5h5s: 
STEP: delete the pod
Aug 27 00:18:10.293: INFO: Waiting for pod pod-subpath-test-downwardapi-5h5s to disappear
Aug 27 00:18:10.298: INFO: Pod pod-subpath-test-downwardapi-5h5s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5h5s
Aug 27 00:18:10.298: INFO: Deleting pod "pod-subpath-test-downwardapi-5h5s" in namespace "subpath-201"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:18:10.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-201" for this suite.
Aug 27 00:18:18.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:18:18.468: INFO: namespace subpath-201 deletion completed in 8.159790413s

• [SLOW TEST:34.589 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
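(Editor's note: the subpath spec above mounts a sub-directory of a downwardAPI volume rather than the whole volume. A minimal sketch of the pattern; paths, the `busybox` image, and file names are illustrative, not the test's exact manifest:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume
      subPath: downward            # mount only this sub-path of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: downward/podname     # file lands inside the mounted sub-path
        fieldRef:
          fieldPath: metadata.name
```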
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:18:18.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 27 00:18:18.552: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 27 00:18:18.602: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 27 00:18:23.609: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 00:18:23.610: INFO: Creating deployment "test-rolling-update-deployment"
Aug 27 00:18:23.619: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 27 00:18:23.632: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 27 00:18:25.670: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 27 00:18:25.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 00:18:27.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734084303, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 00:18:29.679: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 27 00:18:29.697: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1103,SelfLink:/apis/apps/v1/namespaces/deployment-1103/deployments/test-rolling-update-deployment,UID:c3d2a601-81ef-4662-a642-365d2a0af550,ResourceVersion:3053824,Generation:1,CreationTimestamp:2020-08-27 00:18:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-27 00:18:23 +0000 UTC 2020-08-27 00:18:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-27 00:18:28 +0000 UTC 2020-08-27 00:18:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 27 00:18:29.705: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1103,SelfLink:/apis/apps/v1/namespaces/deployment-1103/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:c8564553-eb9d-425b-8523-6f67c339a73f,ResourceVersion:3053812,Generation:1,CreationTimestamp:2020-08-27 00:18:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c3d2a601-81ef-4662-a642-365d2a0af550 0x40030f18c7 0x40030f18c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 00:18:29.705: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 27 00:18:29.706: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1103,SelfLink:/apis/apps/v1/namespaces/deployment-1103/replicasets/test-rolling-update-controller,UID:2df525ec-e5a2-4356-a949-cb7231080ab0,ResourceVersion:3053822,Generation:2,CreationTimestamp:2020-08-27 00:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c3d2a601-81ef-4662-a642-365d2a0af550 0x40030f17df 0x40030f17f0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 00:18:29.711: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-wtpsq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-wtpsq,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1103,SelfLink:/api/v1/namespaces/deployment-1103/pods/test-rolling-update-deployment-79f6b9d75c-wtpsq,UID:64772a51-4c0c-43d3-918e-19fb4e535ac2,ResourceVersion:3053811,Generation:0,CreationTimestamp:2020-08-27 00:18:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c c8564553-eb9d-425b-8523-6f67c339a73f 0x4002dca197 0x4002dca198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dpzvk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dpzvk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dpzvk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002dca220} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dca240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:18:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:18:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:18:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:18:23 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.124,StartTime:2020-08-27 00:18:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-27 00:18:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ccf0335cb71805c88b2f7b5e992a48c3127c83d9f0ca497013c943ef66b26893}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:18:29.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1103" for this suite.
Aug 27 00:18:35.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:18:36.084: INFO: namespace deployment-1103 deletion completed in 6.364425597s

• [SLOW TEST:17.613 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:18:36.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6571/secret-test-607387ea-c17a-49db-9a98-2782f5774749
STEP: Creating a pod to test consume secrets
Aug 27 00:18:36.248: INFO: Waiting up to 5m0s for pod "pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5" in namespace "secrets-6571" to be "success or failure"
Aug 27 00:18:36.267: INFO: Pod "pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.918669ms
Aug 27 00:18:38.273: INFO: Pod "pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023931061s
Aug 27 00:18:40.278: INFO: Pod "pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029235209s
STEP: Saw pod success
Aug 27 00:18:40.278: INFO: Pod "pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5" satisfied condition "success or failure"
Aug 27 00:18:40.282: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5 container env-test: 
STEP: delete the pod
Aug 27 00:18:40.497: INFO: Waiting for pod pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5 to disappear
Aug 27 00:18:40.563: INFO: Pod pod-configmaps-c48d5688-3f62-4a0b-84bb-526599b680c5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:18:40.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6571" for this suite.
Aug 27 00:18:46.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:18:46.837: INFO: namespace secrets-6571 deletion completed in 6.250142704s

• [SLOW TEST:10.752 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:18:46.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-30ec6b26-d72f-4be1-929a-c6460c7264e1
STEP: Creating a pod to test consume configMaps
Aug 27 00:18:46.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0" in namespace "configmap-9790" to be "success or failure"
Aug 27 00:18:46.986: INFO: Pod "pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.505192ms
Aug 27 00:18:48.993: INFO: Pod "pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040895502s
Aug 27 00:18:51.000: INFO: Pod "pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047853368s
STEP: Saw pod success
Aug 27 00:18:51.000: INFO: Pod "pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0" satisfied condition "success or failure"
Aug 27 00:18:51.003: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0 container configmap-volume-test: 
STEP: delete the pod
Aug 27 00:18:51.060: INFO: Waiting for pod pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0 to disappear
Aug 27 00:18:51.069: INFO: Pod pod-configmaps-7ded6112-357e-432b-8abd-cd484c7c7dc0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:18:51.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9790" for this suite.
Aug 27 00:18:57.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:18:57.274: INFO: namespace configmap-9790 deletion completed in 6.194980651s

• [SLOW TEST:10.436 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:18:57.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 27 00:18:57.410: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 27 00:18:57.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8964'
Aug 27 00:18:59.158: INFO: stderr: ""
Aug 27 00:18:59.158: INFO: stdout: "service/redis-slave created\n"
Aug 27 00:18:59.159: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 27 00:18:59.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8964'
Aug 27 00:19:00.932: INFO: stderr: ""
Aug 27 00:19:00.932: INFO: stdout: "service/redis-master created\n"
Aug 27 00:19:00.933: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 27 00:19:00.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8964'
Aug 27 00:19:02.776: INFO: stderr: ""
Aug 27 00:19:02.776: INFO: stdout: "service/frontend created\n"
Aug 27 00:19:02.778: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 27 00:19:02.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8964'
Aug 27 00:19:05.842: INFO: stderr: ""
Aug 27 00:19:05.842: INFO: stdout: "deployment.apps/frontend created\n"
Aug 27 00:19:05.843: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 27 00:19:05.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8964'
Aug 27 00:19:07.752: INFO: stderr: ""
Aug 27 00:19:07.752: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 27 00:19:07.754: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 27 00:19:07.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8964'
Aug 27 00:19:10.664: INFO: stderr: ""
Aug 27 00:19:10.665: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug 27 00:19:10.665: INFO: Waiting for all frontend pods to be Running.
Aug 27 00:19:25.718: INFO: Waiting for frontend to serve content.
Aug 27 00:19:25.748: INFO: Trying to add a new entry to the guestbook.
Aug 27 00:19:25.769: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 27 00:19:25.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8964'
Aug 27 00:19:27.086: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:19:27.086: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 00:19:27.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8964'
Aug 27 00:19:28.979: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:19:28.979: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 00:19:28.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8964'
Aug 27 00:19:30.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:19:30.278: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 00:19:30.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8964'
Aug 27 00:19:31.538: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:19:31.538: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 00:19:31.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8964'
Aug 27 00:19:32.843: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:19:32.843: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 00:19:32.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8964'
Aug 27 00:19:34.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 00:19:34.155: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:19:34.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8964" for this suite.
Aug 27 00:20:14.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:20:14.590: INFO: namespace kubectl-8964 deletion completed in 40.337486668s

• [SLOW TEST:77.316 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:20:14.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-580
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 27 00:20:14.772: INFO: Found 0 stateful pods, waiting for 3
Aug 27 00:20:24.784: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 00:20:24.784: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 00:20:24.784: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 00:20:34.811: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 00:20:34.811: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 00:20:34.811: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 27 00:20:34.854: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 27 00:20:45.015: INFO: Updating stateful set ss2
Aug 27 00:20:45.224: INFO: Waiting for Pod statefulset-580/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 27 00:20:56.102: INFO: Found 2 stateful pods, waiting for 3
Aug 27 00:21:06.249: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 00:21:06.249: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 00:21:06.249: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 27 00:21:06.389: INFO: Updating stateful set ss2
Aug 27 00:21:06.420: INFO: Waiting for Pod statefulset-580/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 00:21:16.456: INFO: Updating stateful set ss2
Aug 27 00:21:16.601: INFO: Waiting for StatefulSet statefulset-580/ss2 to complete update
Aug 27 00:21:16.601: INFO: Waiting for Pod statefulset-580/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 27 00:21:26.790: INFO: Waiting for StatefulSet statefulset-580/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 27 00:21:36.617: INFO: Deleting all statefulset in ns statefulset-580
Aug 27 00:21:36.622: INFO: Scaling statefulset ss2 to 0
Aug 27 00:21:56.663: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 00:21:56.667: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:21:56.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-580" for this suite.
Aug 27 00:22:08.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:22:08.817: INFO: namespace statefulset-580 deletion completed in 12.129475487s

• [SLOW TEST:114.226 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:22:08.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 27 00:22:09.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1658'
Aug 27 00:22:10.693: INFO: stderr: ""
Aug 27 00:22:10.693: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 00:22:10.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1658'
Aug 27 00:22:12.040: INFO: stderr: ""
Aug 27 00:22:12.040: INFO: stdout: "update-demo-nautilus-dd7jf update-demo-nautilus-dg8mb "
Aug 27 00:22:12.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7jf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:22:13.492: INFO: stderr: ""
Aug 27 00:22:13.492: INFO: stdout: ""
Aug 27 00:22:13.492: INFO: update-demo-nautilus-dd7jf is created but not running
Aug 27 00:22:18.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1658'
Aug 27 00:22:19.804: INFO: stderr: ""
Aug 27 00:22:19.804: INFO: stdout: "update-demo-nautilus-dd7jf update-demo-nautilus-dg8mb "
Aug 27 00:22:19.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7jf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:22:21.083: INFO: stderr: ""
Aug 27 00:22:21.083: INFO: stdout: "true"
Aug 27 00:22:21.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7jf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:22:22.359: INFO: stderr: ""
Aug 27 00:22:22.360: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:22:22.360: INFO: validating pod update-demo-nautilus-dd7jf
Aug 27 00:22:22.365: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:22:22.365: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:22:22.365: INFO: update-demo-nautilus-dd7jf is verified up and running
Aug 27 00:22:22.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dg8mb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:22:23.660: INFO: stderr: ""
Aug 27 00:22:23.660: INFO: stdout: "true"
Aug 27 00:22:23.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dg8mb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:22:24.950: INFO: stderr: ""
Aug 27 00:22:24.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 00:22:24.951: INFO: validating pod update-demo-nautilus-dg8mb
Aug 27 00:22:24.955: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 00:22:24.955: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 00:22:24.955: INFO: update-demo-nautilus-dg8mb is verified up and running
STEP: rolling-update to new replication controller
Aug 27 00:22:24.961: INFO: scanned /root for discovery docs: 
Aug 27 00:22:24.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1658'
Aug 27 00:22:56.976: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 27 00:22:56.976: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 00:22:56.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1658'
Aug 27 00:22:58.282: INFO: stderr: ""
Aug 27 00:22:58.282: INFO: stdout: "update-demo-kitten-2th9q update-demo-kitten-bsn2l "
Aug 27 00:22:58.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2th9q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:22:59.606: INFO: stderr: ""
Aug 27 00:22:59.606: INFO: stdout: "true"
Aug 27 00:22:59.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2th9q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:23:00.889: INFO: stderr: ""
Aug 27 00:23:00.889: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 27 00:23:00.889: INFO: validating pod update-demo-kitten-2th9q
Aug 27 00:23:00.894: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 27 00:23:00.894: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 27 00:23:00.895: INFO: update-demo-kitten-2th9q is verified up and running
Aug 27 00:23:00.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bsn2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:23:02.198: INFO: stderr: ""
Aug 27 00:23:02.199: INFO: stdout: "true"
Aug 27 00:23:02.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bsn2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1658'
Aug 27 00:23:03.489: INFO: stderr: ""
Aug 27 00:23:03.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 27 00:23:03.490: INFO: validating pod update-demo-kitten-bsn2l
Aug 27 00:23:03.496: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 27 00:23:03.496: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 27 00:23:03.496: INFO: update-demo-kitten-bsn2l is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:23:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1658" for this suite.
Aug 27 00:23:27.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:23:27.676: INFO: namespace kubectl-1658 deletion completed in 24.172299506s

• [SLOW TEST:78.858 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:23:27.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 27 00:23:27.902: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8680ff8a-170e-4e62-ba45-342d69936d6d", Controller:(*bool)(0x400334bb9a), BlockOwnerDeletion:(*bool)(0x400334bb9b)}}
Aug 27 00:23:27.944: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0ac15a38-dd84-479a-813e-cb0d6cc144bf", Controller:(*bool)(0x400357b37a), BlockOwnerDeletion:(*bool)(0x400357b37b)}}
Aug 27 00:23:27.949: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"63bd892d-6ced-4959-9851-1c94aa8b3f2f", Controller:(*bool)(0x400357b52a), BlockOwnerDeletion:(*bool)(0x400357b52b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:23:32.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9542" for this suite.
Aug 27 00:23:41.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:23:41.180: INFO: namespace gc-9542 deletion completed in 8.180811525s

• [SLOW TEST:13.502 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:23:41.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-659825da-d8f0-4c84-a25c-bcd417822758
STEP: Creating a pod to test consume configMaps
Aug 27 00:23:41.850: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7470584-980d-4474-9793-328125a06122" in namespace "configmap-9088" to be "success or failure"
Aug 27 00:23:42.066: INFO: Pod "pod-configmaps-d7470584-980d-4474-9793-328125a06122": Phase="Pending", Reason="", readiness=false. Elapsed: 215.937711ms
Aug 27 00:23:44.072: INFO: Pod "pod-configmaps-d7470584-980d-4474-9793-328125a06122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222129085s
Aug 27 00:23:46.079: INFO: Pod "pod-configmaps-d7470584-980d-4474-9793-328125a06122": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228957024s
Aug 27 00:23:48.086: INFO: Pod "pod-configmaps-d7470584-980d-4474-9793-328125a06122": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.23625218s
STEP: Saw pod success
Aug 27 00:23:48.086: INFO: Pod "pod-configmaps-d7470584-980d-4474-9793-328125a06122" satisfied condition "success or failure"
Aug 27 00:23:48.097: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d7470584-980d-4474-9793-328125a06122 container configmap-volume-test: 
STEP: delete the pod
Aug 27 00:23:48.117: INFO: Waiting for pod pod-configmaps-d7470584-980d-4474-9793-328125a06122 to disappear
Aug 27 00:23:48.122: INFO: Pod pod-configmaps-d7470584-980d-4474-9793-328125a06122 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:23:48.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9088" for this suite.
Aug 27 00:23:54.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:23:54.356: INFO: namespace configmap-9088 deletion completed in 6.225713784s

• [SLOW TEST:13.175 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:23:54.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-62b887b7-15b7-46ad-8998-088eedaa567e
STEP: Creating a pod to test consume configMaps
Aug 27 00:23:54.471: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c" in namespace "projected-3987" to be "success or failure"
Aug 27 00:23:54.490: INFO: Pod "pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.394028ms
Aug 27 00:23:56.760: INFO: Pod "pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289191035s
Aug 27 00:23:58.908: INFO: Pod "pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437423065s
STEP: Saw pod success
Aug 27 00:23:58.909: INFO: Pod "pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c" satisfied condition "success or failure"
Aug 27 00:23:58.956: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 00:23:59.358: INFO: Waiting for pod pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c to disappear
Aug 27 00:23:59.566: INFO: Pod pod-projected-configmaps-53c057bf-2be8-4cfa-ab3c-e1bb5ffcf89c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:23:59.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3987" for this suite.
Aug 27 00:24:05.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:24:05.788: INFO: namespace projected-3987 deletion completed in 6.211377384s

• [SLOW TEST:11.431 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:24:05.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 27 00:24:05.855: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:24:07.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9399" for this suite.
Aug 27 00:24:13.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:24:13.224: INFO: namespace kubectl-9399 deletion completed in 6.154494126s

• [SLOW TEST:7.434 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:24:13.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:24:21.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7753" for this suite.
Aug 27 00:24:29.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:24:29.507: INFO: namespace namespaces-7753 deletion completed in 8.136508534s
STEP: Destroying namespace "nsdeletetest-9321" for this suite.
Aug 27 00:24:29.510: INFO: Namespace nsdeletetest-9321 was already deleted
STEP: Destroying namespace "nsdeletetest-8881" for this suite.
Aug 27 00:24:35.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:24:35.669: INFO: namespace nsdeletetest-8881 deletion completed in 6.158748686s

• [SLOW TEST:22.440 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:24:35.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 27 00:24:35.751: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:24:54.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4521" for this suite.
Aug 27 00:25:05.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:25:05.260: INFO: namespace pods-4521 deletion completed in 10.471394543s

• [SLOW TEST:29.591 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:25:05.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0827 00:25:36.125966       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 00:25:36.126: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:25:36.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7503" for this suite.
Aug 27 00:25:42.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:25:42.611: INFO: namespace gc-7503 deletion completed in 6.476923949s

• [SLOW TEST:37.350 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:25:42.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 27 00:25:42.877: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 00:25:42.896: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:42.916: INFO: Number of nodes with available pods: 0
Aug 27 00:25:42.916: INFO: Node iruya-worker is running more than one daemon pod
Aug 27 00:25:43.983: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:43.988: INFO: Number of nodes with available pods: 0
Aug 27 00:25:43.988: INFO: Node iruya-worker is running more than one daemon pod
Aug 27 00:25:44.936: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:45.000: INFO: Number of nodes with available pods: 0
Aug 27 00:25:45.000: INFO: Node iruya-worker is running more than one daemon pod
Aug 27 00:25:46.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:46.947: INFO: Number of nodes with available pods: 0
Aug 27 00:25:46.947: INFO: Node iruya-worker is running more than one daemon pod
Aug 27 00:25:47.929: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:47.935: INFO: Number of nodes with available pods: 0
Aug 27 00:25:47.935: INFO: Node iruya-worker is running more than one daemon pod
Aug 27 00:25:49.018: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:49.294: INFO: Number of nodes with available pods: 0
Aug 27 00:25:49.294: INFO: Node iruya-worker is running more than one daemon pod
Aug 27 00:25:49.929: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:50.019: INFO: Number of nodes with available pods: 1
Aug 27 00:25:50.019: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:25:50.951: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:50.957: INFO: Number of nodes with available pods: 2
Aug 27 00:25:50.957: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 27 00:25:51.030: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:51.030: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:51.047: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:52.055: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:52.055: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:52.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:53.056: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:53.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:53.066: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:54.056: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:54.056: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:25:54.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:54.066: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:55.054: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:55.054: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:25:55.054: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:55.062: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:56.055: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:56.055: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:25:56.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:56.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:57.056: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:57.056: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:25:57.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:57.063: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:58.056: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:58.056: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:25:58.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:58.066: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:25:59.056: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:59.056: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:25:59.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:25:59.063: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:00.055: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:00.055: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:26:00.055: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:00.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:01.055: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:01.055: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:26:01.055: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:01.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:02.056: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:02.057: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:26:02.057: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:02.066: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:03.055: INFO: Wrong image for pod: daemon-set-ll6rw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:03.056: INFO: Pod daemon-set-ll6rw is not available
Aug 27 00:26:03.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:03.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:04.055: INFO: Pod daemon-set-7nqlc is not available
Aug 27 00:26:04.055: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:04.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:05.055: INFO: Pod daemon-set-7nqlc is not available
Aug 27 00:26:05.055: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:05.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:06.056: INFO: Pod daemon-set-7nqlc is not available
Aug 27 00:26:06.056: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:06.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:07.054: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:07.062: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:08.109: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:08.118: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:09.055: INFO: Wrong image for pod: daemon-set-m6gkl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 27 00:26:09.055: INFO: Pod daemon-set-m6gkl is not available
Aug 27 00:26:09.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:10.056: INFO: Pod daemon-set-7kqq5 is not available
Aug 27 00:26:10.067: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 27 00:26:10.076: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:10.082: INFO: Number of nodes with available pods: 1
Aug 27 00:26:10.082: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:26:11.090: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:11.094: INFO: Number of nodes with available pods: 1
Aug 27 00:26:11.094: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:26:12.092: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:12.098: INFO: Number of nodes with available pods: 1
Aug 27 00:26:12.098: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:26:13.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:13.098: INFO: Number of nodes with available pods: 1
Aug 27 00:26:13.098: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:26:14.218: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:14.297: INFO: Number of nodes with available pods: 1
Aug 27 00:26:14.297: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:26:15.186: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:15.211: INFO: Number of nodes with available pods: 1
Aug 27 00:26:15.211: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 27 00:26:16.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 00:26:16.098: INFO: Number of nodes with available pods: 2
Aug 27 00:26:16.098: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1718, will wait for the garbage collector to delete the pods
Aug 27 00:26:16.189: INFO: Deleting DaemonSet.extensions daemon-set took: 5.979015ms
Aug 27 00:26:16.490: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.791665ms
Aug 27 00:26:23.677: INFO: Number of nodes with available pods: 0
Aug 27 00:26:23.677: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 00:26:24.024: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1718/daemonsets","resourceVersion":"3055686"},"items":null}

Aug 27 00:26:24.027: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1718/pods","resourceVersion":"3055686"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:26:24.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1718" for this suite.
Aug 27 00:26:30.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:26:30.661: INFO: namespace daemonsets-1718 deletion completed in 6.606812882s

• [SLOW TEST:48.048 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
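The rolling-update block above polls the daemon pods once per second and logs a "Wrong image" line for every pod whose container image has not yet been replaced with the updated DaemonSet spec. A minimal sketch of that comparison step, assuming a snapshot of pod-name-to-image mappings stands in for the real API read (the pod names and images below are the ones that appear in the log; the helper itself is illustrative, not the e2e framework's code):

```python
# Sketch of the e2e rolling-update check seen above: compare each daemon
# pod's image against the expected updated image and report mismatches.
# The snapshot dict is a stand-in for a real pod-list API call.

EXPECTED_IMAGE = "gcr.io/kubernetes-e2e-test-images/redis:1.0"

def pods_with_wrong_image(pods, expected=EXPECTED_IMAGE):
    """Return names of pods still running an image other than `expected`."""
    return [name for name, image in pods.items() if image != expected]

# Mid-rollout snapshot mirroring the log: one pod already replaced, one not.
snapshot = {
    "daemon-set-7nqlc": "gcr.io/kubernetes-e2e-test-images/redis:1.0",
    "daemon-set-m6gkl": "docker.io/library/nginx:1.14-alpine",
}

for name in pods_with_wrong_image(snapshot):
    print(f"Wrong image for pod: {name}")
```

The real test repeats this check every second until the list comes back empty, then verifies one available daemon pod per schedulable node.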
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:26:30.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zdc7
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 00:26:30.763: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zdc7" in namespace "subpath-217" to be "success or failure"
Aug 27 00:26:30.790: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.152746ms
Aug 27 00:26:32.796: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033168141s
Aug 27 00:26:34.803: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 4.04057802s
Aug 27 00:26:36.810: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 6.047420873s
Aug 27 00:26:38.816: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 8.053184471s
Aug 27 00:26:40.822: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 10.059286741s
Aug 27 00:26:42.827: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 12.064201112s
Aug 27 00:26:44.833: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 14.07016589s
Aug 27 00:26:46.838: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 16.075464917s
Aug 27 00:26:48.844: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 18.080747921s
Aug 27 00:26:50.850: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 20.087533779s
Aug 27 00:26:52.855: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Running", Reason="", readiness=true. Elapsed: 22.092451193s
Aug 27 00:26:54.861: INFO: Pod "pod-subpath-test-configmap-zdc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.098525973s
STEP: Saw pod success
Aug 27 00:26:54.862: INFO: Pod "pod-subpath-test-configmap-zdc7" satisfied condition "success or failure"
Aug 27 00:26:54.865: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-zdc7 container test-container-subpath-configmap-zdc7: 
STEP: delete the pod
Aug 27 00:26:55.008: INFO: Waiting for pod pod-subpath-test-configmap-zdc7 to disappear
Aug 27 00:26:55.084: INFO: Pod pod-subpath-test-configmap-zdc7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zdc7
Aug 27 00:26:55.084: INFO: Deleting pod "pod-subpath-test-configmap-zdc7" in namespace "subpath-217"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:26:55.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-217" for this suite.
Aug 27 00:27:03.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:27:03.331: INFO: namespace subpath-217 deletion completed in 8.233101428s

• [SLOW TEST:32.668 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
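The subpath test above is driven by the framework's "Waiting up to 5m0s for pod ... to be 'success or failure'" loop: the pod phase is polled on an interval until it reaches a terminal state or the deadline expires. A sketch of that wait loop, assuming a scripted phase sequence in place of live API reads (the sequence mirrors the Pending/Running/Succeeded progression logged above; the helper is illustrative):

```python
# Minimal sketch of the e2e "wait for success or failure" loop: poll the
# pod phase until it is terminal (Succeeded/Failed) or a poll budget is
# exhausted. A scripted iterator stands in for real pod-status reads.

import itertools

def wait_for_terminal_phase(phases, max_polls=150):
    """Poll until the pod is Succeeded or Failed; return (phase, polls used)."""
    for polls, phase in enumerate(itertools.islice(phases, max_polls)):
        if phase in ("Succeeded", "Failed"):
            return phase, polls
    raise TimeoutError("pod never reached a terminal phase")

# Phase sequence mirroring the log: Pending x2, Running x11, then Succeeded.
observed = iter(["Pending", "Pending"] + ["Running"] * 11 + ["Succeeded"])
phase, polls = wait_for_terminal_phase(observed)
```

In the real framework the poll interval is 2 seconds and the budget is the 5m0s timeout shown in the log.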
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:27:03.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8256
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 00:27:03.446: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 00:27:33.712: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.209:8080/dial?request=hostName&protocol=udp&host=10.244.1.208&port=8081&tries=1'] Namespace:pod-network-test-8256 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 00:27:33.713: INFO: >>> kubeConfig: /root/.kube/config
I0827 00:27:33.772193       7 log.go:172] (0x40009f3600) (0x40004dbe00) Create stream
I0827 00:27:33.772428       7 log.go:172] (0x40009f3600) (0x40004dbe00) Stream added, broadcasting: 1
I0827 00:27:33.780848       7 log.go:172] (0x40009f3600) Reply frame received for 1
I0827 00:27:33.781035       7 log.go:172] (0x40009f3600) (0x4002492000) Create stream
I0827 00:27:33.781110       7 log.go:172] (0x40009f3600) (0x4002492000) Stream added, broadcasting: 3
I0827 00:27:33.782605       7 log.go:172] (0x40009f3600) Reply frame received for 3
I0827 00:27:33.782727       7 log.go:172] (0x40009f3600) (0x4002914000) Create stream
I0827 00:27:33.782800       7 log.go:172] (0x40009f3600) (0x4002914000) Stream added, broadcasting: 5
I0827 00:27:33.784037       7 log.go:172] (0x40009f3600) Reply frame received for 5
I0827 00:27:33.842612       7 log.go:172] (0x40009f3600) Data frame received for 3
I0827 00:27:33.842772       7 log.go:172] (0x4002492000) (3) Data frame handling
I0827 00:27:33.842929       7 log.go:172] (0x4002492000) (3) Data frame sent
I0827 00:27:33.843236       7 log.go:172] (0x40009f3600) Data frame received for 5
I0827 00:27:33.843377       7 log.go:172] (0x4002914000) (5) Data frame handling
I0827 00:27:33.843470       7 log.go:172] (0x40009f3600) Data frame received for 3
I0827 00:27:33.843588       7 log.go:172] (0x4002492000) (3) Data frame handling
I0827 00:27:33.845102       7 log.go:172] (0x40009f3600) Data frame received for 1
I0827 00:27:33.845197       7 log.go:172] (0x40004dbe00) (1) Data frame handling
I0827 00:27:33.845306       7 log.go:172] (0x40004dbe00) (1) Data frame sent
I0827 00:27:33.845395       7 log.go:172] (0x40009f3600) (0x40004dbe00) Stream removed, broadcasting: 1
I0827 00:27:33.845496       7 log.go:172] (0x40009f3600) Go away received
I0827 00:27:33.845897       7 log.go:172] (0x40009f3600) (0x40004dbe00) Stream removed, broadcasting: 1
I0827 00:27:33.846074       7 log.go:172] (0x40009f3600) (0x4002492000) Stream removed, broadcasting: 3
I0827 00:27:33.846198       7 log.go:172] (0x40009f3600) (0x4002914000) Stream removed, broadcasting: 5
Aug 27 00:27:33.846: INFO: Waiting for endpoints: map[]
Aug 27 00:27:33.851: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.209:8080/dial?request=hostName&protocol=udp&host=10.244.2.139&port=8081&tries=1'] Namespace:pod-network-test-8256 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 00:27:33.851: INFO: >>> kubeConfig: /root/.kube/config
I0827 00:27:33.918706       7 log.go:172] (0x4000c2e160) (0x4003504820) Create stream
I0827 00:27:33.918921       7 log.go:172] (0x4000c2e160) (0x4003504820) Stream added, broadcasting: 1
I0827 00:27:33.923774       7 log.go:172] (0x4000c2e160) Reply frame received for 1
I0827 00:27:33.923936       7 log.go:172] (0x4000c2e160) (0x40035048c0) Create stream
I0827 00:27:33.924010       7 log.go:172] (0x4000c2e160) (0x40035048c0) Stream added, broadcasting: 3
I0827 00:27:33.925886       7 log.go:172] (0x4000c2e160) Reply frame received for 3
I0827 00:27:33.926073       7 log.go:172] (0x4000c2e160) (0x4000a63180) Create stream
I0827 00:27:33.926180       7 log.go:172] (0x4000c2e160) (0x4000a63180) Stream added, broadcasting: 5
I0827 00:27:33.928005       7 log.go:172] (0x4000c2e160) Reply frame received for 5
I0827 00:27:33.999826       7 log.go:172] (0x4000c2e160) Data frame received for 3
I0827 00:27:33.999989       7 log.go:172] (0x40035048c0) (3) Data frame handling
I0827 00:27:34.000118       7 log.go:172] (0x40035048c0) (3) Data frame sent
I0827 00:27:34.000273       7 log.go:172] (0x4000c2e160) Data frame received for 5
I0827 00:27:34.000345       7 log.go:172] (0x4000a63180) (5) Data frame handling
I0827 00:27:34.000521       7 log.go:172] (0x4000c2e160) Data frame received for 3
I0827 00:27:34.000816       7 log.go:172] (0x40035048c0) (3) Data frame handling
I0827 00:27:34.002047       7 log.go:172] (0x4000c2e160) Data frame received for 1
I0827 00:27:34.002113       7 log.go:172] (0x4003504820) (1) Data frame handling
I0827 00:27:34.002175       7 log.go:172] (0x4003504820) (1) Data frame sent
I0827 00:27:34.002252       7 log.go:172] (0x4000c2e160) (0x4003504820) Stream removed, broadcasting: 1
I0827 00:27:34.002334       7 log.go:172] (0x4000c2e160) Go away received
I0827 00:27:34.002694       7 log.go:172] (0x4000c2e160) (0x4003504820) Stream removed, broadcasting: 1
I0827 00:27:34.002784       7 log.go:172] (0x4000c2e160) (0x40035048c0) Stream removed, broadcasting: 3
I0827 00:27:34.002855       7 log.go:172] (0x4000c2e160) (0x4000a63180) Stream removed, broadcasting: 5
Aug 27 00:27:34.003: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:27:34.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8256" for this suite.
Aug 27 00:28:02.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:28:02.174: INFO: namespace pod-network-test-8256 deletion completed in 28.151408114s

• [SLOW TEST:58.841 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
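The intra-pod UDP check above works by exec-ing `curl` inside a host test pod against the prober pod's `/dial` endpoint, which in turn sends `tries` UDP requests to the target pod and reports which hostnames answered. A sketch of how that probe URL is assembled, assuming the URL shape copied from the `ExecWithOptions` lines in the log (the builder function itself is illustrative):

```python
# Sketch of the /dial probe URL the networking test curls from inside the
# host-test-container-pod: the prober pod relays a UDP hostName request to
# the target pod and returns the responses it collected.

from urllib.parse import urlencode

def dial_url(prober_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL executed via curl inside the host pod."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{prober_ip}:8080/dial?{query}"

# The first probe issued in the log: prober 10.244.1.209, target 10.244.1.208.
url = dial_url("10.244.1.209", "10.244.1.208")
```

The test passes once every expected endpoint has been observed in the relayed responses; an empty "Waiting for endpoints: map[]" line means nothing is left outstanding.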
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:28:02.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6861
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 00:28:02.395: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 00:28:26.685: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.210:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6861 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 00:28:26.685: INFO: >>> kubeConfig: /root/.kube/config
I0827 00:28:26.751824       7 log.go:172] (0x4001074210) (0x4002fde280) Create stream
I0827 00:28:26.752029       7 log.go:172] (0x4001074210) (0x4002fde280) Stream added, broadcasting: 1
I0827 00:28:26.756339       7 log.go:172] (0x4001074210) Reply frame received for 1
I0827 00:28:26.756514       7 log.go:172] (0x4001074210) (0x40003988c0) Create stream
I0827 00:28:26.756609       7 log.go:172] (0x4001074210) (0x40003988c0) Stream added, broadcasting: 3
I0827 00:28:26.758242       7 log.go:172] (0x4001074210) Reply frame received for 3
I0827 00:28:26.758379       7 log.go:172] (0x4001074210) (0x4002492fa0) Create stream
I0827 00:28:26.758454       7 log.go:172] (0x4001074210) (0x4002492fa0) Stream added, broadcasting: 5
I0827 00:28:26.760062       7 log.go:172] (0x4001074210) Reply frame received for 5
I0827 00:28:26.833950       7 log.go:172] (0x4001074210) Data frame received for 3
I0827 00:28:26.834147       7 log.go:172] (0x40003988c0) (3) Data frame handling
I0827 00:28:26.834295       7 log.go:172] (0x4001074210) Data frame received for 5
I0827 00:28:26.834461       7 log.go:172] (0x4002492fa0) (5) Data frame handling
I0827 00:28:26.834541       7 log.go:172] (0x40003988c0) (3) Data frame sent
I0827 00:28:26.834620       7 log.go:172] (0x4001074210) Data frame received for 3
I0827 00:28:26.834677       7 log.go:172] (0x40003988c0) (3) Data frame handling
I0827 00:28:26.835240       7 log.go:172] (0x4001074210) Data frame received for 1
I0827 00:28:26.835352       7 log.go:172] (0x4002fde280) (1) Data frame handling
I0827 00:28:26.835456       7 log.go:172] (0x4002fde280) (1) Data frame sent
I0827 00:28:26.835553       7 log.go:172] (0x4001074210) (0x4002fde280) Stream removed, broadcasting: 1
I0827 00:28:26.835677       7 log.go:172] (0x4001074210) Go away received
I0827 00:28:26.836074       7 log.go:172] (0x4001074210) (0x4002fde280) Stream removed, broadcasting: 1
I0827 00:28:26.836207       7 log.go:172] (0x4001074210) (0x40003988c0) Stream removed, broadcasting: 3
I0827 00:28:26.836303       7 log.go:172] (0x4001074210) (0x4002492fa0) Stream removed, broadcasting: 5
Aug 27 00:28:26.836: INFO: Found all expected endpoints: [netserver-0]
Aug 27 00:28:26.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.140:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6861 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 00:28:26.845: INFO: >>> kubeConfig: /root/.kube/config
I0827 00:28:26.900284       7 log.go:172] (0x40010409a0) (0x40032fb4a0) Create stream
I0827 00:28:26.900466       7 log.go:172] (0x40010409a0) (0x40032fb4a0) Stream added, broadcasting: 1
I0827 00:28:26.903998       7 log.go:172] (0x40010409a0) Reply frame received for 1
I0827 00:28:26.904203       7 log.go:172] (0x40010409a0) (0x4000398960) Create stream
I0827 00:28:26.904282       7 log.go:172] (0x40010409a0) (0x4000398960) Stream added, broadcasting: 3
I0827 00:28:26.906054       7 log.go:172] (0x40010409a0) Reply frame received for 3
I0827 00:28:26.906222       7 log.go:172] (0x40010409a0) (0x4001d8ef00) Create stream
I0827 00:28:26.906301       7 log.go:172] (0x40010409a0) (0x4001d8ef00) Stream added, broadcasting: 5
I0827 00:28:26.907726       7 log.go:172] (0x40010409a0) Reply frame received for 5
I0827 00:28:26.973391       7 log.go:172] (0x40010409a0) Data frame received for 5
I0827 00:28:26.973547       7 log.go:172] (0x4001d8ef00) (5) Data frame handling
I0827 00:28:26.973674       7 log.go:172] (0x40010409a0) Data frame received for 3
I0827 00:28:26.973789       7 log.go:172] (0x4000398960) (3) Data frame handling
I0827 00:28:26.973892       7 log.go:172] (0x4000398960) (3) Data frame sent
I0827 00:28:26.973972       7 log.go:172] (0x40010409a0) Data frame received for 3
I0827 00:28:26.974041       7 log.go:172] (0x4000398960) (3) Data frame handling
I0827 00:28:26.974821       7 log.go:172] (0x40010409a0) Data frame received for 1
I0827 00:28:26.974954       7 log.go:172] (0x40032fb4a0) (1) Data frame handling
I0827 00:28:26.975040       7 log.go:172] (0x40032fb4a0) (1) Data frame sent
I0827 00:28:26.975132       7 log.go:172] (0x40010409a0) (0x40032fb4a0) Stream removed, broadcasting: 1
I0827 00:28:26.975280       7 log.go:172] (0x40010409a0) Go away received
I0827 00:28:26.975858       7 log.go:172] (0x40010409a0) (0x40032fb4a0) Stream removed, broadcasting: 1
I0827 00:28:26.976018       7 log.go:172] (0x40010409a0) (0x4000398960) Stream removed, broadcasting: 3
I0827 00:28:26.976144       7 log.go:172] (0x40010409a0) (0x4001d8ef00) Stream removed, broadcasting: 5
Aug 27 00:28:26.976: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:28:26.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6861" for this suite.
Aug 27 00:28:51.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:28:51.256: INFO: namespace pod-network-test-6861 deletion completed in 24.270228785s

• [SLOW TEST:49.079 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
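The node-pod HTTP variant above curls each netserver's `/hostName` endpoint directly (with `--max-time 15 --connect-timeout 1`) and pipes the reply through `grep -v '^\s*$'` to drop blank lines; the test succeeds once every expected endpoint has answered, matching the "Found all expected endpoints" lines in the log. A sketch of that bookkeeping, assuming illustrative reply strings (the endpoint names are the ones from the log):

```python
# Sketch of the endpoint bookkeeping behind "Found all expected endpoints":
# collect the non-blank hostName replies and check they cover every
# expected netserver pod.

def found_all_endpoints(expected, responses):
    """True once the set of non-blank hostName replies covers `expected`."""
    seen = {line.strip() for line in responses if line.strip()}
    return expected <= seen

expected = {"netserver-0", "netserver-1"}
replies = ["netserver-0\n", "", "netserver-1\n"]  # blank line gets filtered
all_found = found_all_endpoints(expected, replies)
```

With only one of the two replies present the check keeps polling, which is why each probe in the log is followed by its own "Found all expected endpoints" confirmation.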
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:28:51.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c5609722-0fea-4cde-8e34-02ae561d9e20
STEP: Creating a pod to test consume configMaps
Aug 27 00:28:51.408: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e" in namespace "projected-6407" to be "success or failure"
Aug 27 00:28:51.454: INFO: Pod "pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.915074ms
Aug 27 00:28:53.461: INFO: Pod "pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053454751s
Aug 27 00:28:55.692: INFO: Pod "pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284035613s
STEP: Saw pod success
Aug 27 00:28:55.692: INFO: Pod "pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e" satisfied condition "success or failure"
Aug 27 00:28:55.698: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 00:28:55.930: INFO: Waiting for pod pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e to disappear
Aug 27 00:28:55.937: INFO: Pod pod-projected-configmaps-9993518c-703b-4972-ac93-d692a42f7c1e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:28:55.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6407" for this suite.
Aug 27 00:29:01.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:29:02.103: INFO: namespace projected-6407 deletion completed in 6.156636116s

• [SLOW TEST:10.845 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:29:02.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:29:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7620" for this suite.
Aug 27 00:29:08.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:29:08.403: INFO: namespace kubelet-test-7620 deletion completed in 6.179674681s

• [SLOW TEST:6.297 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:29:08.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:29:08.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c" in namespace "downward-api-9074" to be "success or failure"
Aug 27 00:29:08.549: INFO: Pod "downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.649029ms
Aug 27 00:29:10.668: INFO: Pod "downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153785006s
Aug 27 00:29:12.746: INFO: Pod "downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231778047s
STEP: Saw pod success
Aug 27 00:29:12.746: INFO: Pod "downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c" satisfied condition "success or failure"
Aug 27 00:29:12.812: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c container client-container: 
STEP: delete the pod
Aug 27 00:29:13.056: INFO: Waiting for pod downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c to disappear
Aug 27 00:29:13.127: INFO: Pod downwardapi-volume-2708022f-b330-412e-a449-427f57eebb1c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:29:13.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9074" for this suite.
Aug 27 00:29:19.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:29:19.459: INFO: namespace downward-api-9074 deletion completed in 6.32349108s

• [SLOW TEST:11.054 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:29:19.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-57b35c5c-4552-4e42-8eaa-6f36d5d9967c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:29:25.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3094" for this suite.
Aug 27 00:29:48.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:29:48.241: INFO: namespace configmap-3094 deletion completed in 22.305961623s

• [SLOW TEST:28.781 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:29:48.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2060/configmap-test-42366bd5-2d62-4a56-b45d-bbb504cca896
STEP: Creating a pod to test consume configMaps
Aug 27 00:29:48.811: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956" in namespace "configmap-2060" to be "success or failure"
Aug 27 00:29:48.840: INFO: Pod "pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956": Phase="Pending", Reason="", readiness=false. Elapsed: 28.955143ms
Aug 27 00:29:50.883: INFO: Pod "pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071866414s
Aug 27 00:29:52.889: INFO: Pod "pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077596717s
Aug 27 00:29:55.016: INFO: Pod "pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204823129s
STEP: Saw pod success
Aug 27 00:29:55.017: INFO: Pod "pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956" satisfied condition "success or failure"
Aug 27 00:29:55.191: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956 container env-test: 
STEP: delete the pod
Aug 27 00:29:55.395: INFO: Waiting for pod pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956 to disappear
Aug 27 00:29:55.471: INFO: Pod pod-configmaps-fb3970bd-ff92-4aa6-9b6f-e7156f7ad956 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:29:55.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2060" for this suite.
Aug 27 00:30:05.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:30:06.043: INFO: namespace configmap-2060 deletion completed in 10.560286869s

• [SLOW TEST:17.801 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:30:06.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98
Aug 27 00:30:07.291: INFO: Pod name my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98: Found 0 pods out of 1
Aug 27 00:30:12.411: INFO: Pod name my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98: Found 1 pods out of 1
Aug 27 00:30:12.412: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98" are running
Aug 27 00:30:14.423: INFO: Pod "my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98-6ldsc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 00:30:07 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 00:30:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 00:30:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 00:30:07 +0000 UTC Reason: Message:}])
Aug 27 00:30:14.424: INFO: Trying to dial the pod
Aug 27 00:30:19.436: INFO: Controller my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98: Got expected result from replica 1 [my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98-6ldsc]: "my-hostname-basic-c7e7dc0c-50bc-4008-ad5e-e6f96095ca98-6ldsc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:30:19.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8354" for this suite.
Aug 27 00:30:25.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:30:25.607: INFO: namespace replication-controller-8354 deletion completed in 6.163498198s

• [SLOW TEST:19.558 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:30:25.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 27 00:30:33.137: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:30:34.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2064" for this suite.
Aug 27 00:30:56.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:30:56.585: INFO: namespace replicaset-2064 deletion completed in 22.417855955s

• [SLOW TEST:30.976 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:30:56.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 27 00:30:56.691: INFO: Waiting up to 5m0s for pod "pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9" in namespace "emptydir-9242" to be "success or failure"
Aug 27 00:30:56.701: INFO: Pod "pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.639053ms
Aug 27 00:30:58.871: INFO: Pod "pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180307218s
Aug 27 00:31:00.878: INFO: Pod "pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1871327s
STEP: Saw pod success
Aug 27 00:31:00.878: INFO: Pod "pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9" satisfied condition "success or failure"
Aug 27 00:31:00.883: INFO: Trying to get logs from node iruya-worker2 pod pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9 container test-container: 
STEP: delete the pod
Aug 27 00:31:01.072: INFO: Waiting for pod pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9 to disappear
Aug 27 00:31:01.077: INFO: Pod pod-6a1b120b-dce7-4d5c-aa13-fa667b7ca6a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:31:01.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9242" for this suite.
Aug 27 00:31:07.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:31:07.330: INFO: namespace emptydir-9242 deletion completed in 6.243538468s

• [SLOW TEST:10.741 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:31:07.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug 27 00:31:07.525: INFO: Waiting up to 5m0s for pod "client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e" in namespace "containers-5937" to be "success or failure"
Aug 27 00:31:07.552: INFO: Pod "client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.309867ms
Aug 27 00:31:09.657: INFO: Pod "client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132351644s
Aug 27 00:31:11.891: INFO: Pod "client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365902518s
Aug 27 00:31:13.896: INFO: Pod "client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371661346s
STEP: Saw pod success
Aug 27 00:31:13.897: INFO: Pod "client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e" satisfied condition "success or failure"
Aug 27 00:31:13.901: INFO: Trying to get logs from node iruya-worker pod client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e container test-container: 
STEP: delete the pod
Aug 27 00:31:14.132: INFO: Waiting for pod client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e to disappear
Aug 27 00:31:14.495: INFO: Pod client-containers-4fcf5cc6-882c-4352-bd0d-908e9b52c07e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:31:14.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5937" for this suite.
Aug 27 00:31:22.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:31:23.228: INFO: namespace containers-5937 deletion completed in 8.723627434s

• [SLOW TEST:15.898 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:31:23.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c444977b-e868-4b86-b72c-07318c144234
STEP: Creating a pod to test consume secrets
Aug 27 00:31:23.937: INFO: Waiting up to 5m0s for pod "pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d" in namespace "secrets-5904" to be "success or failure"
Aug 27 00:31:24.190: INFO: Pod "pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 253.573958ms
Aug 27 00:31:26.366: INFO: Pod "pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429544296s
Aug 27 00:31:28.371: INFO: Pod "pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434758918s
Aug 27 00:31:30.377: INFO: Pod "pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.440498674s
STEP: Saw pod success
Aug 27 00:31:30.377: INFO: Pod "pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d" satisfied condition "success or failure"
Aug 27 00:31:30.382: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d container secret-volume-test: 
STEP: delete the pod
Aug 27 00:31:30.442: INFO: Waiting for pod pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d to disappear
Aug 27 00:31:30.481: INFO: Pod pod-secrets-37b949cf-6a97-4582-8130-afb878420c3d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:31:30.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5904" for this suite.
Aug 27 00:31:36.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:31:36.673: INFO: namespace secrets-5904 deletion completed in 6.184674976s

• [SLOW TEST:13.443 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:31:36.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 27 00:31:36.809: INFO: Waiting up to 5m0s for pod "client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8" in namespace "containers-5064" to be "success or failure"
Aug 27 00:31:36.837: INFO: Pod "client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.51826ms
Aug 27 00:31:38.843: INFO: Pod "client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033081358s
Aug 27 00:31:40.849: INFO: Pod "client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039007561s
STEP: Saw pod success
Aug 27 00:31:40.849: INFO: Pod "client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8" satisfied condition "success or failure"
Aug 27 00:31:40.853: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8 container test-container: 
STEP: delete the pod
Aug 27 00:31:40.875: INFO: Waiting for pod client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8 to disappear
Aug 27 00:31:40.986: INFO: Pod client-containers-ecd30d65-64f4-4276-a6db-6253d9ad70a8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:31:40.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5064" for this suite.
Aug 27 00:31:47.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:31:47.322: INFO: namespace containers-5064 deletion completed in 6.329664449s

• [SLOW TEST:10.646 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:31:47.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 00:31:47.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6679'
Aug 27 00:31:52.497: INFO: stderr: ""
Aug 27 00:31:52.497: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 27 00:31:52.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6679'
Aug 27 00:32:03.649: INFO: stderr: ""
Aug 27 00:32:03.650: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:32:03.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6679" for this suite.
Aug 27 00:32:09.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:32:09.923: INFO: namespace kubectl-6679 deletion completed in 6.262714226s

• [SLOW TEST:22.598 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
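The `kubectl run … --restart=Never --generator=run-pod/v1` invocation above creates a bare Pod rather than a managed workload. A minimal sketch of the equivalent manifest, with the name, namespace, and image taken from the log and everything else (labels, defaults) assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-6679
spec:
  restartPolicy: Never        # matches --restart=Never
  containers:
  - name: e2e-test-nginx-pod  # kubectl run names the container after the pod
    image: docker.io/library/nginx:1.14-alpine
```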
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:32:09.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug 27 00:32:10.254: INFO: Waiting up to 5m0s for pod "var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28" in namespace "var-expansion-807" to be "success or failure"
Aug 27 00:32:10.270: INFO: Pod "var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28": Phase="Pending", Reason="", readiness=false. Elapsed: 15.207011ms
Aug 27 00:32:12.317: INFO: Pod "var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062983725s
Aug 27 00:32:14.335: INFO: Pod "var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080701019s
STEP: Saw pod success
Aug 27 00:32:14.335: INFO: Pod "var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28" satisfied condition "success or failure"
Aug 27 00:32:14.340: INFO: Trying to get logs from node iruya-worker pod var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28 container dapi-container: 
STEP: delete the pod
Aug 27 00:32:14.374: INFO: Waiting for pod var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28 to disappear
Aug 27 00:32:14.382: INFO: Pod var-expansion-f5f3fb06-28f0-4a98-bbfc-c08ec1540b28 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:32:14.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-807" for this suite.
Aug 27 00:32:20.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:32:20.606: INFO: namespace var-expansion-807 deletion completed in 6.216643223s

• [SLOW TEST:10.683 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:32:20.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 00:32:20.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3479'
Aug 27 00:32:22.079: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 00:32:22.080: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 27 00:32:22.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3479'
Aug 27 00:32:23.435: INFO: stderr: ""
Aug 27 00:32:23.436: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:32:23.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3479" for this suite.
Aug 27 00:32:29.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:32:29.857: INFO: namespace kubectl-3479 deletion completed in 6.411552669s

• [SLOW TEST:9.248 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
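Note the stderr above: `--generator=job/v1` was already deprecated in v1.15 in favor of `kubectl create` or `--generator=run-pod/v1`. The Job it produces can be sketched as the manifest below, with name, namespace, restart policy, and image taken from the log and the rest assumed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
  namespace: kubectl-3479
spec:
  template:
    spec:
      restartPolicy: OnFailure   # matches --restart=OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```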
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:32:29.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 27 00:32:47.987: INFO: Container started at 2020-08-27 00:32:32 +0000 UTC, pod became ready at 2020-08-27 00:32:47 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:32:47.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4963" for this suite.
Aug 27 00:33:10.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:33:10.142: INFO: namespace container-probe-4963 deletion completed in 22.145017887s

• [SLOW TEST:40.280 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
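The probe test above reports the container starting at 00:32:32 and the pod becoming ready at 00:32:47, i.e. a ~15-second gap consistent with a readiness probe's `initialDelaySeconds`. The log does not show the pod spec; the fragment below is a hypothetical illustration of such a probe, not the test's actual manifest:

```yaml
# Illustrative pod spec fragment (names, image, and probe details assumed):
containers:
- name: test-webserver
  image: docker.io/library/nginx:1.14-alpine
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 15   # no readiness checks before this delay,
    periodSeconds: 5          # so the pod cannot be Ready earlier
```

The test asserts exactly what the log states: the pod is not ready before the initial delay, and the container never restarts.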
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:33:10.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0827 00:33:21.434665       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 00:33:21.434: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:33:21.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-335" for this suite.
Aug 27 00:33:29.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:33:29.677: INFO: namespace gc-335 deletion completed in 8.236703575s

• [SLOW TEST:19.534 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
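The "delete the rc" / "wait for all pods to be garbage collected" steps above exercise cascading deletion: the ReplicationController is deleted without orphaning, so the garbage collector removes its pods. At the API level this corresponds to a delete request carrying a non-orphaning propagation policy; a sketch of that request body (the RC name is not shown in the log):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Background"
}
```

With `"propagationPolicy": "Orphan"` the pods would instead survive the RC's deletion, which is what the companion orphaning tests verify.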
SSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:33:29.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-80f3d332-823d-4aeb-8755-f07fb51700d6
STEP: Creating configMap with name cm-test-opt-upd-5123c455-81c8-4031-8475-05f79b9fbfb9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-80f3d332-823d-4aeb-8755-f07fb51700d6
STEP: Updating configmap cm-test-opt-upd-5123c455-81c8-4031-8475-05f79b9fbfb9
STEP: Creating configMap with name cm-test-opt-create-ca027f8a-faaa-4fd2-9a99-7feb3648c53a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:33:53.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8682" for this suite.
Aug 27 00:34:21.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:34:21.418: INFO: namespace configmap-8682 deletion completed in 28.216390482s

• [SLOW TEST:51.739 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
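The `cm-test-opt-*` steps above exercise optional ConfigMap volume sources: the pod mounts ConfigMaps marked optional, so it runs even while one of them (the `-del-` one) is deleted and another (the `-create-` one) does not yet exist, and the kubelet reflects each change into the volume. A sketch of such a volume source, using a name from the log (the surrounding pod spec is assumed):

```yaml
# Illustrative volume entry in a pod spec:
volumes:
- name: cm-volume
  configMap:
    name: cm-test-opt-del-80f3d332-823d-4aeb-8755-f07fb51700d6
    optional: true   # pod starts, and stays healthy, even if this ConfigMap is absent
```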
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:34:21.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:34:21.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d" in namespace "downward-api-2285" to be "success or failure"
Aug 27 00:34:21.532: INFO: Pod "downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.901208ms
Aug 27 00:34:23.563: INFO: Pod "downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047245739s
Aug 27 00:34:25.570: INFO: Pod "downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054106646s
STEP: Saw pod success
Aug 27 00:34:25.570: INFO: Pod "downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d" satisfied condition "success or failure"
Aug 27 00:34:25.579: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d container client-container: 
STEP: delete the pod
Aug 27 00:34:25.655: INFO: Waiting for pod downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d to disappear
Aug 27 00:34:25.658: INFO: Pod downwardapi-volume-e65459b1-fb20-4e84-9e18-22c52659c14d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:34:25.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2285" for this suite.
Aug 27 00:34:31.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:34:31.814: INFO: namespace downward-api-2285 deletion completed in 6.148241017s

• [SLOW TEST:10.394 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
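The downward API test above verifies that when a container sets no CPU limit, `resourceFieldRef: limits.cpu` exposed through a downward API volume falls back to the node's allocatable CPU. A hypothetical illustration of that wiring (the log shows only the container name `client-container`; the rest is assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so the value below is node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```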
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:34:31.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:34:31.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d" in namespace "projected-3429" to be "success or failure"
Aug 27 00:34:31.930: INFO: Pod "downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.56966ms
Aug 27 00:34:33.938: INFO: Pod "downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023834428s
Aug 27 00:34:35.949: INFO: Pod "downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035390002s
STEP: Saw pod success
Aug 27 00:34:35.949: INFO: Pod "downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d" satisfied condition "success or failure"
Aug 27 00:34:35.954: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d container client-container: 
STEP: delete the pod
Aug 27 00:34:36.059: INFO: Waiting for pod downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d to disappear
Aug 27 00:34:36.062: INFO: Pod downwardapi-volume-9f0c33ee-b530-45f1-99a4-29b44f164c4d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:34:36.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3429" for this suite.
Aug 27 00:34:42.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:34:42.212: INFO: namespace projected-3429 deletion completed in 6.142476529s

• [SLOW TEST:10.396 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:34:42.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 27 00:34:42.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5949'
Aug 27 00:34:45.028: INFO: stderr: ""
Aug 27 00:34:45.028: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 27 00:34:46.044: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:46.044: INFO: Found 0 / 1
Aug 27 00:34:47.037: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:47.037: INFO: Found 0 / 1
Aug 27 00:34:48.397: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:48.398: INFO: Found 0 / 1
Aug 27 00:34:49.036: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:49.037: INFO: Found 0 / 1
Aug 27 00:34:50.091: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:50.092: INFO: Found 0 / 1
Aug 27 00:34:51.036: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:51.036: INFO: Found 0 / 1
Aug 27 00:34:52.438: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:52.438: INFO: Found 0 / 1
Aug 27 00:34:53.242: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:53.242: INFO: Found 0 / 1
Aug 27 00:34:54.036: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:54.036: INFO: Found 1 / 1
Aug 27 00:34:54.036: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 27 00:34:54.041: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:54.042: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 00:34:54.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rdl4v --namespace=kubectl-5949 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 27 00:34:55.316: INFO: stderr: ""
Aug 27 00:34:55.316: INFO: stdout: "pod/redis-master-rdl4v patched\n"
STEP: checking annotations
Aug 27 00:34:55.355: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 00:34:55.355: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:34:55.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5949" for this suite.
Aug 27 00:35:17.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:35:17.533: INFO: namespace kubectl-5949 deletion completed in 22.171761726s

• [SLOW TEST:35.316 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
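The `kubectl patch … -p {"metadata":{"annotations":{"x":"y"}}}` call above applies a merge patch; for plain string maps such as `metadata.annotations`, this reduces to a recursive dictionary merge that preserves untouched keys. A minimal, cluster-free sketch of that merge semantics (not kubectl's actual implementation; the existing annotation `a: b` is invented for illustration):

```python
def merge_patch(obj, patch):
    """Recursively merge `patch` into `obj`, the way a merge patch
    updates nested maps like metadata.annotations."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)
        elif value is None:
            obj.pop(key, None)   # a null value deletes the key (RFC 7386)
        else:
            obj[key] = value
    return obj

pod = {"metadata": {"name": "redis-master-rdl4v",
                    "annotations": {"a": "b"}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])
```

The "checking annotations" step then simply re-lists the pods and asserts the `x: y` annotation is present.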
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:35:17.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3f7b0d30-f0e3-4856-8c7d-e498427879a6
STEP: Creating a pod to test consume configMaps
Aug 27 00:35:17.838: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4" in namespace "projected-9025" to be "success or failure"
Aug 27 00:35:17.858: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.650155ms
Aug 27 00:35:19.866: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02761785s
Aug 27 00:35:22.183: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345233157s
Aug 27 00:35:24.211: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372768352s
Aug 27 00:35:26.218: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4": Phase="Running", Reason="", readiness=true. Elapsed: 8.380201186s
Aug 27 00:35:28.225: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.387323201s
STEP: Saw pod success
Aug 27 00:35:28.225: INFO: Pod "pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4" satisfied condition "success or failure"
Aug 27 00:35:28.230: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 00:35:28.315: INFO: Waiting for pod pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4 to disappear
Aug 27 00:35:28.325: INFO: Pod pod-projected-configmaps-393f603d-e290-4b9f-a132-5ce647a1f0d4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:35:28.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9025" for this suite.
Aug 27 00:35:36.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:35:37.675: INFO: namespace projected-9025 deletion completed in 9.34358907s

• [SLOW TEST:20.139 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
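The repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above follow a standard poll-with-deadline loop: check a condition, sleep a fixed interval, give up at the timeout. A minimal sketch of that pattern (function and parameter names are illustrative, not the e2e framework's actual code):

```python
import time

def wait_for(condition, timeout, interval=2.0,
             sleep=time.sleep, clock=time.monotonic):
    """Poll `condition` until it returns truthy or `timeout` seconds
    elapse. Returns True on success, False on timeout, mirroring the
    framework's 'Waiting up to 5m0s for pod ...' loops (which log the
    pod phase on each iteration)."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False
```

The ~2 s spacing between the `Elapsed:` log lines above is this interval; the `5m0s` is the timeout.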
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:35:37.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-362/configmap-test-48d7c2de-2100-488e-9e3c-dc1a207bdeca
STEP: Creating a pod to test consume configMaps
Aug 27 00:35:38.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d" in namespace "configmap-362" to be "success or failure"
Aug 27 00:35:38.930: INFO: Pod "pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d": Phase="Pending", Reason="", readiness=false. Elapsed: 307.966524ms
Aug 27 00:35:40.938: INFO: Pod "pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315239591s
Aug 27 00:35:42.960: INFO: Pod "pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337313176s
Aug 27 00:35:44.967: INFO: Pod "pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.344208537s
STEP: Saw pod success
Aug 27 00:35:44.967: INFO: Pod "pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d" satisfied condition "success or failure"
Aug 27 00:35:44.972: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d container env-test: 
STEP: delete the pod
Aug 27 00:35:45.225: INFO: Waiting for pod pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d to disappear
Aug 27 00:35:45.272: INFO: Pod pod-configmaps-e0ca0138-6e37-47e0-b057-f8ab1e1dc15d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:35:45.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-362" for this suite.
Aug 27 00:35:51.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:35:51.530: INFO: namespace configmap-362 deletion completed in 6.250774941s

• [SLOW TEST:13.854 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:35:51.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 27 00:35:51.680: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8940,SelfLink:/api/v1/namespaces/watch-8940/configmaps/e2e-watch-test-resource-version,UID:70c477bc-a2d6-4c83-9341-1672075cdd72,ResourceVersion:3057574,Generation:0,CreationTimestamp:2020-08-27 00:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 00:35:51.681: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8940,SelfLink:/api/v1/namespaces/watch-8940/configmaps/e2e-watch-test-resource-version,UID:70c477bc-a2d6-4c83-9341-1672075cdd72,ResourceVersion:3057575,Generation:0,CreationTimestamp:2020-08-27 00:35:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:35:51.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8940" for this suite.
Aug 27 00:35:57.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:35:58.011: INFO: namespace watch-8940 deletion completed in 6.311113916s

• [SLOW TEST:6.480 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
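The Watchers test above makes two changes and a delete, then starts a watch at the resourceVersion returned by the first update; the watch replays only the events strictly newer than that version (the MODIFIED with `mutation: 2` at RV 3057574 and the DELETED at RV 3057575). A toy replay of that semantics — the first two resourceVersions below are made up for illustration; only 3057574 and 3057575 appear in the log:

```python
# (event type, resourceVersion, data) — RVs 3057572/3057573 are hypothetical
events = [
    ("ADDED",    3057572, {"mutation": "0"}),
    ("MODIFIED", 3057573, {"mutation": "1"}),  # the "first update"
    ("MODIFIED", 3057574, {"mutation": "2"}),  # observed by the test
    ("DELETED",  3057575, {"mutation": "2"}),  # observed by the test
]

def watch_from(events, resource_version):
    """Yield only events with a resourceVersion strictly greater than
    the starting version, as a watch begun at that RV would."""
    return [e for e in events if e[1] > resource_version]
```

Starting the watch at the first update's RV therefore yields exactly the two notifications the log records.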
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:35:58.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 27 00:35:58.100: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 00:35:58.127: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 00:35:58.158: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
Aug 27 00:35:58.172: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.173: INFO: 	Container app ready: true, restart count 0
Aug 27 00:35:58.173: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.173: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 00:35:58.173: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.173: INFO: 	Container app ready: true, restart count 0
Aug 27 00:35:58.173: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.173: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 00:35:58.173: INFO: daemon-set-6z8rp from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.173: INFO: 	Container app ready: true, restart count 0
Aug 27 00:35:58.173: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Aug 27 00:35:58.187: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.187: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 00:35:58.187: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.187: INFO: 	Container app ready: true, restart count 0
Aug 27 00:35:58.187: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.187: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 00:35:58.187: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.187: INFO: 	Container app ready: true, restart count 0
Aug 27 00:35:58.187: INFO: daemon-set-fzgmk from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded)
Aug 27 00:35:58.187: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.162ef8dfe2a30b87], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:35:59.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1137" for this suite.
Aug 27 00:36:05.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:36:05.429: INFO: namespace sched-pred-1137 deletion completed in 6.181794394s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.415 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
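The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") comes from the nodeSelector predicate: a pod's nodeSelector is a hard requirement, and a node is eligible only if every key/value pair appears verbatim in its labels. A sketch of that check (node names are from the log; the selector and labels are hypothetical):

```python
def node_matches(node_labels, node_selector):
    """A nodeSelector matches a node iff every key/value pair is
    present, with exact equality, in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def schedulable_nodes(nodes, node_selector):
    """Return the names of nodes the pod could be scheduled onto."""
    return [name for name, labels in nodes.items()
            if node_matches(labels, node_selector)]
```

With a selector no node carries, the eligible set is empty and the scheduler reports 0/N nodes available, as in the event above.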
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:36:05.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 27 00:36:10.174: INFO: Successfully updated pod "annotationupdate29da66ad-6d3b-479a-92f8-02bc7a4dfdd0"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:36:12.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1503" for this suite.
Aug 27 00:36:34.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:36:34.530: INFO: namespace downward-api-1503 deletion completed in 22.186834261s

• [SLOW TEST:29.096 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:36:34.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 27 00:36:34.725: INFO: Waiting up to 5m0s for pod "pod-95c0a053-7b93-4b72-be46-302e8c82bcb1" in namespace "emptydir-3891" to be "success or failure"
Aug 27 00:36:34.740: INFO: Pod "pod-95c0a053-7b93-4b72-be46-302e8c82bcb1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172688ms
Aug 27 00:36:37.002: INFO: Pod "pod-95c0a053-7b93-4b72-be46-302e8c82bcb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277607714s
Aug 27 00:36:39.010: INFO: Pod "pod-95c0a053-7b93-4b72-be46-302e8c82bcb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285095627s
Aug 27 00:36:41.016: INFO: Pod "pod-95c0a053-7b93-4b72-be46-302e8c82bcb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.29159815s
STEP: Saw pod success
Aug 27 00:36:41.017: INFO: Pod "pod-95c0a053-7b93-4b72-be46-302e8c82bcb1" satisfied condition "success or failure"
Aug 27 00:36:41.021: INFO: Trying to get logs from node iruya-worker pod pod-95c0a053-7b93-4b72-be46-302e8c82bcb1 container test-container: 
STEP: delete the pod
Aug 27 00:36:41.220: INFO: Waiting for pod pod-95c0a053-7b93-4b72-be46-302e8c82bcb1 to disappear
Aug 27 00:36:41.464: INFO: Pod pod-95c0a053-7b93-4b72-be46-302e8c82bcb1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:36:41.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3891" for this suite.
Aug 27 00:36:47.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:36:47.744: INFO: namespace emptydir-3891 deletion completed in 6.271900402s

• [SLOW TEST:13.213 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:36:47.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 27 00:36:47.837: INFO: Creating deployment "nginx-deployment"
Aug 27 00:36:47.973: INFO: Waiting for observed generation 1
Aug 27 00:36:50.681: INFO: Waiting for all required pods to come up
Aug 27 00:36:51.815: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 27 00:37:08.845: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 27 00:37:08.890: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 27 00:37:08.898: INFO: Updating deployment nginx-deployment
Aug 27 00:37:08.898: INFO: Waiting for observed generation 2
Aug 27 00:37:11.337: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 27 00:37:12.118: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 27 00:37:12.430: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 27 00:37:13.309: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 27 00:37:13.309: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 27 00:37:13.339: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 27 00:37:13.345: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 27 00:37:13.345: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 27 00:37:13.353: INFO: Updating deployment nginx-deployment
Aug 27 00:37:13.353: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 27 00:37:14.004: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 27 00:37:17.094: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
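The numbers above pin down the proportional-scaling arithmetic: at scale time the two ReplicaSets hold 8 and 5 replicas (total 13, matching the pre-scale `max-replicas` annotation of 10 + maxSurge 3), the new allowed total is 30 + 3 = 33, and each ReplicaSet is scaled toward its rounded proportional share — 8·33/13 ≈ 20 and 5·33/13 ≈ 13. A simplified Python sketch of that math (names and the rounding/capping details are assumptions reconstructed from the observed numbers, not the deployment controller's exact code):

```python
def proportional_sizes(rs_sizes, annotated_max, new_total, max_surge):
    """Distribute a deployment scale change across its ReplicaSets in
    proportion to their current sizes. `annotated_max` is the pre-scale
    replicas + maxSurge (the max-replicas annotation, 13 in the log)."""
    allowed = new_total + max_surge          # 30 + 3 = 33
    to_add = allowed - sum(rs_sizes)         # 33 - 13 = 20
    added = 0
    result = []
    # largest ReplicaSet first, so rounding leftovers land on it
    for size in sorted(rs_sizes, reverse=True):
        target = round(size * allowed / annotated_max)
        fraction = min(target - size, to_add - added)
        result.append(size + fraction)
        added += fraction
    return result
```

Plugging in the log's values reproduces the verified `.spec.replicas` of 20 and 13.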
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 27 00:37:18.613: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6389,SelfLink:/apis/apps/v1/namespaces/deployment-6389/deployments/nginx-deployment,UID:92991c1d-4888-4201-b645-97f660b0efc6,ResourceVersion:3058048,Generation:3,CreationTimestamp:2020-08-27 00:36:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-27 00:37:13 +0000 UTC 2020-08-27 00:37:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-27 00:37:15 +0000 UTC 2020-08-27 00:36:47 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Aug 27 00:37:20.016: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6389,SelfLink:/apis/apps/v1/namespaces/deployment-6389/replicasets/nginx-deployment-55fb7cb77f,UID:1de127fd-6363-4af9-86c4-1f9a8908134b,ResourceVersion:3058025,Generation:3,CreationTimestamp:2020-08-27 00:37:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 92991c1d-4888-4201-b645-97f660b0efc6 0x4003b5a617 0x4003b5a618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 00:37:20.016: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 27 00:37:20.017: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6389,SelfLink:/apis/apps/v1/namespaces/deployment-6389/replicasets/nginx-deployment-7b8c6f4498,UID:f090411e-d2bf-42a2-8daf-e318a83a4eb5,ResourceVersion:3058039,Generation:3,CreationTimestamp:2020-08-27 00:36:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 92991c1d-4888-4201-b645-97f660b0efc6 0x4003b5a6e7 0x4003b5a6e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug 27 00:37:20.460: INFO: Pod "nginx-deployment-55fb7cb77f-2rmz8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2rmz8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-2rmz8,UID:84c2f0cd-9fa6-457e-9b18-dfe2903a064e,ResourceVersion:3058012,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcb327 0x4002dcb328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcb3a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcb3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.461: INFO: Pod "nginx-deployment-55fb7cb77f-72rnh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-72rnh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-72rnh,UID:f9c6c1c5-ec29-49a4-a9c0-e3183b18d083,ResourceVersion:3057957,Generation:0,CreationTimestamp:2020-08-27 00:37:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcb447 0x4002dcb448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcb4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcb4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:09 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.462: INFO: Pod "nginx-deployment-55fb7cb77f-7tqcj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7tqcj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-7tqcj,UID:5356c723-a026-4442-8b19-18a3db7bba95,ResourceVersion:3058083,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcb5b0 0x4002dcb5b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcb630} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcb650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-27 00:37:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.463: INFO: Pod "nginx-deployment-55fb7cb77f-8kxvz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8kxvz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-8kxvz,UID:31aa98d3-c8ab-4a6f-9e0e-fd0dafb746ba,ResourceVersion:3058022,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcb720 0x4002dcb721}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcb7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcb7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.465: INFO: Pod "nginx-deployment-55fb7cb77f-cwnvb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cwnvb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-cwnvb,UID:97e1afcb-cc7b-4dbe-87fe-063d64c9af22,ResourceVersion:3058015,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcb847 0x4002dcb848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcb8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcb8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.466: INFO: Pod "nginx-deployment-55fb7cb77f-lz9k8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lz9k8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-lz9k8,UID:777ec477-e1fe-4a11-a9da-4221723f756e,ResourceVersion:3058010,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcb967 0x4002dcb968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcb9e0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcba00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.467: INFO: Pod "nginx-deployment-55fb7cb77f-nd87x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nd87x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-nd87x,UID:de34cf6f-12b4-474a-893a-c8eeacb8ce43,ResourceVersion:3057956,Generation:0,CreationTimestamp:2020-08-27 00:37:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcba87 0x4002dcba88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcbb20} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcbb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-27 00:37:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.469: INFO: Pod "nginx-deployment-55fb7cb77f-ng5wq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ng5wq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-ng5wq,UID:7d6a9816-d47c-41fe-846f-4b91abcd626b,ResourceVersion:3058043,Generation:0,CreationTimestamp:2020-08-27 00:37:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcbc10 0x4002dcbc11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcbca0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcbcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.469: INFO: Pod "nginx-deployment-55fb7cb77f-swxsd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-swxsd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-swxsd,UID:1a408904-cbbc-4732-aa07-cb1e7d1ee30f,ResourceVersion:3058014,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcbda0 0x4002dcbda1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcbe20} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcbe40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.470: INFO: Pod "nginx-deployment-55fb7cb77f-tw7jh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tw7jh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-tw7jh,UID:978e9ff1-5d2f-43d3-9bed-8508ce3f2f2b,ResourceVersion:3058084,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4002dcbec7 0x4002dcbec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4002dcbf40} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002dcbf60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.471: INFO: Pod "nginx-deployment-55fb7cb77f-wjdqf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wjdqf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-wjdqf,UID:40536bf6-d47a-4635-8abe-9b8cb5f030fd,ResourceVersion:3057959,Generation:0,CreationTimestamp:2020-08-27 00:37:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4003c74030 0x4003c74031}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4003c740b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c740d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-27 00:37:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.472: INFO: Pod "nginx-deployment-55fb7cb77f-xfdbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xfdbf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-xfdbf,UID:2490e3e8-a79c-4802-8b6d-b0304f31ff57,ResourceVersion:3058055,Generation:0,CreationTimestamp:2020-08-27 00:37:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4003c741a0 0x4003c741a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4003c74220} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:09 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.168,StartTime:2020-08-27 00:37:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.473: INFO: Pod "nginx-deployment-55fb7cb77f-zq592" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zq592,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-55fb7cb77f-zq592,UID:3072a104-b2bd-483a-a5c2-483fb9fe49f0,ResourceVersion:3057958,Generation:0,CreationTimestamp:2020-08-27 00:37:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1de127fd-6363-4af9-86c4-1f9a8908134b 0x4003c74330 0x4003c74331}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4003c743b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c743d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:09 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.474: INFO: Pod "nginx-deployment-7b8c6f4498-4xjjp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4xjjp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-4xjjp,UID:fa11897c-a886-4f7b-b9bc-68c8ea6d4b7b,ResourceVersion:3058041,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c744a0 0x4003c744a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74510} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-27 00:37:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.475: INFO: Pod "nginx-deployment-7b8c6f4498-552qx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-552qx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-552qx,UID:928300dc-0ada-48d7-be1e-e0b0b7752d8e,ResourceVersion:3058029,Generation:0,CreationTimestamp:2020-08-27 00:37:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c745f7 0x4003c745f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74670} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-27 00:37:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.475: INFO: Pod "nginx-deployment-7b8c6f4498-5kdxc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5kdxc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-5kdxc,UID:789ba2be-c538-477e-9f43-932fc20356e0,ResourceVersion:3058009,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c74757 0x4003c74758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c747d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c747f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.476: INFO: Pod "nginx-deployment-7b8c6f4498-7p7rn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7p7rn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-7p7rn,UID:8e9a20de-0478-46eb-a5d2-3eb060e455ef,ResourceVersion:3058054,Generation:0,CreationTimestamp:2020-08-27 00:37:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c74877 0x4003c74878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c748f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.477: INFO: Pod "nginx-deployment-7b8c6f4498-b2f7z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b2f7z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-b2f7z,UID:325a63f6-8253-476a-8b16-12e1395f09d3,ResourceVersion:3058070,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c749d7 0x4003c749d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74a50} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.478: INFO: Pod "nginx-deployment-7b8c6f4498-cqspr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cqspr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-cqspr,UID:affc2717-2fcf-4233-8810-c8a9c248799c,ResourceVersion:3058011,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c74b37 0x4003c74b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.479: INFO: Pod "nginx-deployment-7b8c6f4498-dvkrs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dvkrs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-dvkrs,UID:736809bc-2995-449a-85fa-fe91676bd714,ResourceVersion:3058017,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c74c57 0x4003c74c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.480: INFO: Pod "nginx-deployment-7b8c6f4498-f2w22" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f2w22,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-f2w22,UID:21b6f2ab-d3f5-473d-b1c9-c5cd534dc3fc,ResourceVersion:3057879,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c74d77 0x4003c74d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74df0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.166,StartTime:2020-08-27 00:36:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:04 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://62ef05ecae47cb341922126f3f54e5d5a81f9b16a574573d1a11eeaea8651dbf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.481: INFO: Pod "nginx-deployment-7b8c6f4498-f9wkp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f9wkp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-f9wkp,UID:1110d8cb-3cb2-4fd8-a374-b7f7930c3d74,ResourceVersion:3057868,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c74ee7 0x4003c74ee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c74f60} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c74f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.164,StartTime:2020-08-27 00:36:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ff216a44dd6b079e163d0b625f743cfe33965d923adeaed76b93be9a6669cccc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.482: INFO: Pod "nginx-deployment-7b8c6f4498-grkx8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-grkx8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-grkx8,UID:ca5b1ff6-4491-4b0c-b64a-53fcdc30bd13,ResourceVersion:3057891,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75057 0x4003c75058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c750d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c750f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.232,StartTime:2020-08-27 00:36:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f93bee401e3415a04ec2c40460c88019169b35530f5bac5ac713d7be39395b2e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.483: INFO: Pod "nginx-deployment-7b8c6f4498-gvwfx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gvwfx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-gvwfx,UID:40e8c91f-4dc3-439b-9386-abd39f8c134e,ResourceVersion:3057867,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c751c7 0x4003c751c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c75240} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.230,StartTime:2020-08-27 00:36:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6780a45076c8ad2e00b780e689baa7d5f6325ce76ac631fb064477146ff2e219}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.484: INFO: Pod "nginx-deployment-7b8c6f4498-hgdrg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hgdrg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-hgdrg,UID:e36ad3e6-1511-4c19-b582-9fec87cf5c41,ResourceVersion:3058013,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75337 0x4003c75338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c753b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c753d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.485: INFO: Pod "nginx-deployment-7b8c6f4498-jx9mm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jx9mm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-jx9mm,UID:f41bbe78-5177-4b81-9b9f-2b2192077d91,ResourceVersion:3057876,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75457 0x4003c75458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c754d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c754f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.229,StartTime:2020-08-27 00:36:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fd32c946ca8afec3524061f14de68dc6a3dedb3f24b16c4c53e1f89619cffb74}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.486: INFO: Pod "nginx-deployment-7b8c6f4498-nhnq8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nhnq8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-nhnq8,UID:566621bb-e35b-49e3-a344-93092aaf4b2c,ResourceVersion:3058005,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c755c7 0x4003c755c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c75640} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.486: INFO: Pod "nginx-deployment-7b8c6f4498-nlg2h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nlg2h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-nlg2h,UID:5b245756-7288-4399-8afd-5d5edc47e213,ResourceVersion:3057863,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c756e7 0x4003c756e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c75760} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.163,StartTime:2020-08-27 00:36:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ee4c853c454071f03718481291ba3c4e94eacd1a9b90364fb119d30c6abae90b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.487: INFO: Pod "nginx-deployment-7b8c6f4498-q7l6n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q7l6n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-q7l6n,UID:5e59ce74-7bef-4f9d-b32f-503b63a749ff,ResourceVersion:3058008,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75857 0x4003c75858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c758d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c758f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.488: INFO: Pod "nginx-deployment-7b8c6f4498-sk9c5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sk9c5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-sk9c5,UID:a28aa2bb-a310-44bd-8bae-e9cd7860857e,ResourceVersion:3057896,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75977 0x4003c75978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c759f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.167,StartTime:2020-08-27 00:36:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8d737366b41f087c3152afba748d12296e5512431f22de2eceeee0bcf7dfce58}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.489: INFO: Pod "nginx-deployment-7b8c6f4498-tvk4g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tvk4g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-tvk4g,UID:217bdc2e-fe96-4cb1-b851-8c8dfddc4d89,ResourceVersion:3057880,Generation:0,CreationTimestamp:2020-08-27 00:36:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75ae7 0x4003c75ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c75b60} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:36:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.228,StartTime:2020-08-27 00:36:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 00:37:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://01e06ceccec1a58c14273e6f27dae22775318f4e398af8eb88d2263f6ab4e1de}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.490: INFO: Pod "nginx-deployment-7b8c6f4498-txth9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-txth9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-txth9,UID:b09ac09e-b646-4484-bed7-8ce77b8919d3,ResourceVersion:3058072,Generation:0,CreationTimestamp:2020-08-27 00:37:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75c57 0x4003c75c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c75cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-27 00:37:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 00:37:20.491: INFO: Pod "nginx-deployment-7b8c6f4498-xc2gt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xc2gt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6389,SelfLink:/api/v1/namespaces/deployment-6389/pods/nginx-deployment-7b8c6f4498-xc2gt,UID:9603d1a0-2c00-41e2-af21-59bd2ddfeee3,ResourceVersion:3058028,Generation:0,CreationTimestamp:2020-08-27 00:37:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f090411e-d2bf-42a2-8daf-e318a83a4eb5 0x4003c75db7 0x4003c75db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-47hqx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-47hqx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-47hqx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003c75e30} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003c75e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 00:37:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-27 00:37:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:37:20.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6389" for this suite.
Aug 27 00:38:18.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:38:18.356: INFO: namespace deployment-6389 deletion completed in 56.894351534s

• [SLOW TEST:90.610 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 27 00:38:18.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-770
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-770 to expose endpoints map[]
Aug 27 00:38:19.075: INFO: successfully validated that service endpoint-test2 in namespace services-770 exposes endpoints map[] (119.263834ms elapsed)
STEP: Creating pod pod1 in namespace services-770
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-770 to expose endpoints map[pod1:[80]]
Aug 27 00:38:24.578: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.28518113s elapsed, will retry)
Aug 27 00:38:26.765: INFO: successfully validated that service endpoint-test2 in namespace services-770 exposes endpoints map[pod1:[80]] (7.472295847s elapsed)
STEP: Creating pod pod2 in namespace services-770
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-770 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 27 00:38:32.274: INFO: Unexpected endpoints: found map[88d849b3-c475-4b86-a468-3a4be3ea750a:[80]], expected map[pod1:[80] pod2:[80]] (5.501206464s elapsed, will retry)
Aug 27 00:38:34.343: INFO: successfully validated that service endpoint-test2 in namespace services-770 exposes endpoints map[pod1:[80] pod2:[80]] (7.57042495s elapsed)
STEP: Deleting pod pod1 in namespace services-770
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-770 to expose endpoints map[pod2:[80]]
Aug 27 00:38:34.404: INFO: successfully validated that service endpoint-test2 in namespace services-770 exposes endpoints map[pod2:[80]] (53.899244ms elapsed)
STEP: Deleting pod pod2 in namespace services-770
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-770 to expose endpoints map[]
Aug 27 00:38:34.424: INFO: successfully validated that service endpoint-test2 in namespace services-770 exposes endpoints map[] (14.947465ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 27 00:38:34.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-770" for this suite.
Aug 27 00:38:56.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 00:38:57.062: INFO: namespace services-770 deletion completed in 22.550792275s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.705 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSAug 27 00:38:57.063: INFO: Running AfterSuite actions on all nodes
Aug 27 00:38:57.064: INFO: Running AfterSuite actions on node 1
Aug 27 00:38:57.065: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 7527.651 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS